E. Liscio
Human values are the abstract motivations that drive our opinions and actions. AI agents ought to align their behavior with our value preferences (the relative importance we ascribe to different values) to co-exist with us in our society. However, value preferences differ across ...
Large-scale survey tools enable the collection of citizen feedback in opinion corpora. Extracting the key arguments from a large and noisy set of opinions helps in understanding the opinions quickly and accurately. Fully automated methods can extract arguments but (1) require large labeled datasets and (2) work well for known viewpoints, but not for novel points of view ...
We adopt an emerging and prominent vision of human-centred Artificial Intelligence that requires building trustworthy intelligent systems. Such systems should be capable of dealing with the challenges of an interconnected, globalised world by handling plurality and by abiding by ...
Values, such as freedom and safety, are the core motivations that guide us humans. A prerequisite for creating value-aligned multiagent systems that involve humans and artificial agents is value inference, the process of identifying values and reasoning about human value preferences ...
Value Inference in Sociotechnical Systems (Blue Sky Ideas Track)
As artificial agents become increasingly embedded in our society, we must ensure that their behavior aligns with human values. Value alignment entails value inference, the process of identifying values and reasoning about how humans prioritize values. We introduce a holistic framework ...
We propose methods for an AI agent to estimate the value preferences of individuals in a hybrid participatory system, considering a setting where participants make choices and provide textual motivations for those choices. We focus on situations where there is a conflict between ...
HyEnA: A Hybrid Method for Extracting Arguments from Opinions
The key arguments underlying a large and noisy set of opinions help in understanding the opinions quickly and accurately. Fully automated methods can extract arguments but (1) require large labeled datasets and (2) work well for known viewpoints, but not for novel points of view. We propose ...
Moral values influence how we interpret and act upon the information we receive. Identifying human moral values is essential for artificially intelligent agents to co-exist with humans. Recent progress in natural language processing allows the identification of moral values in text ...
What values should an agent align with? An empirical comparison of general and context-specific values
The pursuit of values drives human behavior and promotes cooperation. Existing research focuses on general values (e.g., Schwartz) that transcend contexts. However, context-specific values are necessary to (1) understand human decisions, and (2) engineer intelligent agents that ...
Value alignment is a crucial aspect of ethical multiagent systems. An important step toward value alignment is identifying values specific to an application context. However, identifying context-specific values is complex and cognitively demanding. To support this process, we develop ...
The pursuit of values drives human behavior and promotes cooperation. Existing research focuses on general values (e.g., Schwartz) that transcend contexts. However, context-specific values are necessary to (1) understand human decisions, and (2) engineer intelligent agents that ...