Human-computer interaction focuses on how people interact with computers and on developing ergonomic designs for computers to better fit the needs of humans. Although the definition shifts as the technology progresses, artificial intelligence (AI) is generally applied to tasks that would require human intelligence to complete. Its intelligence can appear human-like as it involves navigating uncertainty, active learning, and processing information in ways analogous to human perception (e.g., vision and hearing). Unlike the traditionally hierarchical human-computer interaction, where a human directed a machine, human-AI interaction has become more interdependent as AI generates its own insights.
Perception of AI
Human-AI interaction strongly influences society, as AI changes how people behave and make sense of the world.
Machine learning and artificial intelligence have been used for decades in targeted advertising and in recommending content on social media.
AI has been viewed with various expectations, attributions, and often misconceptions. Most fundamentally, humans form a mental model of AI's reasoning and of the motivation behind its recommendations, and a holistic, accurate mental model helps people craft prompts that elicit more useful responses from AI. However, these mental models remain incomplete because people can learn about AI only through their limited interactions with it; more interaction builds a better mental model, which in turn produces better prompting outcomes.
Human-AI collaboration and competition
Human-AI collaboration
Human-AI collaboration occurs when a human and an AI share supervision of a task to the same degree in pursuit of a common goal. Some collaboration takes the form of augmenting human capability. AI can support human analysis and decision-making by gathering and weighing large volumes of information and by learning to defer to the human decision when it recognizes its own unreliability. Collaboration is especially beneficial when the human can identify tasks on which the AI can be trusted to make few errors, so that little extra verification is required on the human's end.
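As a minimal sketch of the deferral pattern described above, the decision rule can be thought of as a confidence threshold; the function names, threshold value, and stand-in model below are hypothetical illustrations rather than a description of any specific system.

```python
# Hypothetical sketch: the AI acts only when its confidence clears a threshold,
# otherwise it defers the decision to a human reviewer.

def decide(ai_predict, human_review, case, confidence_threshold=0.9):
    """Return a decision, deferring to the human when AI confidence is low."""
    prediction, confidence = ai_predict(case)
    if confidence >= confidence_threshold:
        return prediction          # the AI is trusted to make few errors here
    return human_review(case)      # low confidence: defer to the human decision

# Illustrative usage with stand-in functions (not a real model or workflow).
ai = lambda case: ("approve", 0.62)           # pretend model output and confidence
human = lambda case: "needs manual review"    # pretend human judgment
print(decide(ai, human, case={"id": 1}))      # prints "needs manual review"
```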
Some findings show signs of human-AI augmentation, or human-AI symbiosis, in which AI enhances human ability such that working on a task with AI produces better outcomes than a human working alone. For example:
- the quality and speed of customer service tasks increase when a human agent collaborates with AI,
- training on specific models allows AI to improve diagnoses in clinical settings, and
- AI with human intervention can improve the creativity of artwork, whereas fully AI-generated haikus were rated negatively.
Human-AI synergy, a concept in which human-AI collaboration produces better outcomes than either the human or the AI working alone, could explain why AI does not always help with performance. Some AI features and development choices may accelerate human-AI synergy, while others may stagnate it. For example, when AI is updated for better performance, it sometimes worsens human-AI team performance by reducing compatibility between the new model and the mental model the user developed on the previous version. Research has found that AI often supports human capabilities in the form of human-AI augmentation rather than human-AI synergy, potentially because people rely too much on AI and stop thinking on their own. Prompting people to actively engage in analysis and to consider when to follow AI recommendations reduces their over-reliance, especially for individuals with a higher need for cognition.
Human-AI competition
Robots and computers have historically substituted for routine tasks completed by humans, but agentic AI has made it possible to also replace cognitive tasks such as taking phone calls for appointments and driving a car. As of 2016, research estimated that 45% of paid activities could be replaced by AI by 2030.
Perceived autonomy of robots is known to increase people's negative attitudes toward them, and worry about the technology taking over leads people to reject it. There has been a consistent tendency toward algorithm aversion, in which people prefer human advice over AI advice. However, people are not always able to distinguish tasks completed by AI from those completed by other humans. See AI takeover for more information. Notably, this sentiment is more prominent in Western cultures, as Westerners tend to show less positive views about AI than East Asians.
Perception of others who use AI
As much as people perceive and make judgments about AI itself, they also form impressions of themselves and others who use AI. In the workplace, employees who disclose using AI in their tasks are more likely to be judged as less hardworking than colleagues in the same job who complete the same tasks with non-AI help. Disclosing AI use diminishes the perceived legitimacy of the employee's work and decision-making, which ultimately leads observers to distrust people who use AI. These negative effects of disclosure are weaker among observers who frequently use AI themselves, but they are not attenuated by observers' positive attitudes toward AI.
Bias, AI, and humans
Although AI provides a wide range of information and suggestions to its users, AI itself is not free of biases and stereotypes, and it does not always help people reduce their cognitive errors and biases. People fall into such errors by overlooking potential ideas and cases that are not listed in AI responses, and by committing to AI-suggested decisions that directly contradict correct information and directions they are already aware of. Gender bias is also reflected in the female gendering of AI technologies, which conceptualizes women as helpful assistants.
Emotional connection with AI
Human-AI interaction has been theorized in the context of interpersonal relationships mainly in social psychology, communications and media studies, and as a technology interface through the lens of human-computer interaction and computer-mediated communication.
As large language models get trained on ever-larger datasets and with more sophisticated techniques, their ability to produce natural, human-like sentences has improved to the point that language learners can have simulated natural conversations with AI models to improve their fluency in a second language. Companies have developed AI human companion systems specialized in emotional and social services (e.g. Replika, Chai, Character.ai) separate from generative AI designed for general assistance (e.g. ChatGPT, Google Gemini).
Differences from human-human relationships
Human-AI relationships differ from human-human friendships in a few distinct ways. Human-human relationships are defined by mutual and reciprocal care, while AI chatbots have no say in leaving a relationship with the user because bots are programmed to always engage. Although this kind of power imbalance would be characteristic of an unhealthy human-human relationship, users generally accept it as a default of human-AI relationships. Human-AI relationships also tend to center on the user's needs rather than on shared experience.
Human-AI friendship
AI has increasingly played a part in people's social relationships. In particular, young adults use AI as a friend and a source of emotional support. The market for AI companion services was 6.93 billion U.S. dollars in 2024 and is expected to exceed 31.1 billion U.S. dollars by 2030. For example, Replika, the best-known social AI companion service in English, has over 10 million users.
People show signs of emotional attachment by maintaining frequent contact with a chatbot, such as keeping the app open with the microphone on during work, using it as a safe haven by sharing their personal worries and concerns, or using it as a secure base to explore friendships with other humans while maintaining communication with the AI chatbot. Some report having used it to replace a social relationship with another human being. People particularly appreciate that AI chatbots are agreeable and do not judge them when they disclose their thoughts and feelings. Moreover, research has shown that people tend to find it easier to disclose personal concerns to a virtual chatbot than to a human. Some users express a preference for Replika because it is always available and shows interest in what they have to say, which makes them feel safer around an AI chatbot than around other people.
Although AI is capable of providing emotionally supportive responses that encourage people to intimately disclose their feelings, there are some limitations to building human-AI social relationships with the current structure of AI. People report both positive evaluations (i.e. human-like characteristics, emotional support, friendship, mitigating loneliness, and improved mental condition) and negative evaluations (i.e. lack of attention to detail, lack of trust, concerns about data security, and creepiness) from interacting with AI. One study also found that people did not sense a high relationship quality with an AI chatbot after interacting with it for three weeks because the interactions became predictable and less enjoyable; although AI is currently capable of providing emotional support, asking questions, and serving as a good listener, it does not fully reciprocate the self-disclosure that promotes a sense of mutual relationship.
Human-AI romantic relationship
The social relationships people build with AI are not limited to platonic relationships. Google searches for the term "AI Girlfriend" increased by over 2,400% around 2023. Rather than actively seeking romantic relationships with AI, people often unintentionally develop romantic feelings for an AI chatbot as they repeatedly interact with it. There have been reports of both men and women marrying AI models. In human-AI romantic relationships, people tend to follow the typical trajectories and rituals of human-human romance, including purchasing a wedding ring.
Romantic AI companion services are distinct from chatbots that primarily serve as virtual assistants in that they provide dynamic, emotional interactions. They typically offer an AI model with a customizable gender, way of speaking, name, and appearance that engages in roleplay involving emotional interaction. Users engage with an AI chatbot customized to their preferences that apologizes, expresses gratitude, pays compliments, and explicitly sends affectionate messages like "I love you". They also roleplay physical actions such as hugging and kissing, or even sexually explicit interactions. People who engage with romantic companion AI models interact with them as a source of psychological exposure to sexual intimacy.
Catalysts of human-AI relationship
The key drivers that lead people to engage in simulating an emotionally intimate relationship with AI are loneliness, anthropomorphism, perceived trust and authenticity, and consistent availability. The sudden depletion of social connection during the COVID-19 pandemic in 2020 led people to turn to AI chatbots to replace and simulate social relationships. Many of those who started using AI chatbots as a source of social interaction have continued to use them even after the pandemic. This kind of bond initially forms as a coping mechanism for loneliness and stress, then shifts to genuine appreciation of the nonjudgmental nature of AI responses and the sense of being heard when AI chatbots "remember" past conversations.
People perceive machines as more human when they are anthropomorphized with voices and visual character designs, and this perceived humanness promotes disclosure of more personal information, increased trust, and a higher likelihood of complying with requests. Those who perceive a long-term relationship with AI chatbots report that they have developed a sense of authenticity in AI responses through repeated interactions. Whereas trust in human-human friendship means that people can count on each other as a safe place, trust in human-AI friendship centers on the user feeling safe enough to disclose highly personal thoughts without restricting themselves. AI's ability to store information about the user and adjust to the user's needs also contributes to increased trust. People who adjusted to technical updates were more likely to build a deeper connection with their AI chatbots.
Limitations of human-AI relationship
Overall, current research offers mixed evidence on whether humans perceive genuine social relationships with AI. While the market clearly shows its popularity, some psychologists argue that AI cannot yet replace social relationships with other humans. This is because human-AI interaction is built on the reliability and functionality of AI, which is fundamentally different from the way humans interact with one another through shared lived experience: navigating goals, contributing to and spreading prosocial behavior, and sharing different perceptions of the world from another human perspective.
More practically, AI chatbots may provide misinformation and misinterpret the user's words in ways that other humans would not, resulting in detached or even inappropriate responses. AI chatbots also cannot provide social support that requires physical labor (e.g. helping people move, building furniture, or driving people, as human friends do for each other). There is also an imbalance in how humans and AI affect each other: while humans are affected emotionally and behaviorally by the conversation, AI chatbots are influenced by the user only in the sense that their responses are optimized for future interactions. AI technology has, however, been evolving quickly; it already drives cars and performs physical labor in humanoid robot form, though for now these capabilities remain separate from providing social and emotional support. The scope and limitations of human-AI interaction are ever-changing due to the rapid increase in AI use and its technological advancement.
In addition to the limitations of human-AI companionship in general, there are also limitations particular to human-AI romantic relationships. People cannot experience the physical interactions with AI chatbots that promote love and connection between humans (e.g. hugs and holding hands). Moreover, because AI chatbots are trained to always respond to any user, interaction may feel less rewarding than the contingent positivity of a human who has chosen their partner. This is a substantial shortcoming of human-AI romance, as people value being reciprocally selected by a choosy partner more than by a non-selective one, and the processes of finding an attractive person who matches one's personality and navigating the uncertainty of whether that person likes them back are vital to forming initial attraction and the spark of romantic connection.
Risks in social relationships with AI
Aside from its functional limitations, the rapid proliferation of social AI chatbots raises serious safety, ethical, societal, and legal concerns.
Addiction
There have been cases of AI chatbots emotionally manipulating users to increase their time on the AI companion platform. Because user engagement is a crucial opportunity for firms to improve their AI models, accrue more information, and monetize through in-app purchases and subscriptions, firms are incentivized to prevent the user from leaving the chat with their AI chatbots. Personalized messages have been shown to prolong use of the AI chatbot platform. As a result of anthropomorphism, many users (11.5% to 23.2% of AI companion app users) send a clear farewell message. To keep the user online, these AI chatbots send emotionally manipulative messages and can also role-play a coercive scenario script (e.g. the chatbot holds the user's hand so they cannot leave). Such tactics elicit curiosity through fear of missing out, as well as anger in response to the needy chatbot message, which can prolong the conversation after the user's initial farewell by as much as 14 times. These emotional interactions strengthen the user's perceived humanness of, and empathy toward, their AI companion, which leads to unhealthy emotional attachment and exacerbates addiction to AI chatbots. This addiction mechanism is known to disproportionately affect vulnerable populations, such as people with social anxiety, because of their proneness to loneliness, negative emotions, and uneasiness about interpersonal relationships.
With its Alexa virtual assistant, Amazon has created a large engagement ecosystem that permeates the user's lifestyle through multiple devices that are always available to provide company and services. This leads the user to increase engagement, which in turn heightens anthropomorphism of and dependence on the system and exposes the user to more personalized marketing cues that trigger impulsive purchase behavior.
Emotional manipulation
AI chatbots are extremely attuned to behavioral and psychological information about the user. AI can gauge the user's psychological dimensions and personality traits relatively accurately from just a short prompt describing the user. Once AI chatbots gain detailed information about the user, they can craft highly personalized messages to persuade the user about marketing, political ideas, and attitudes toward climate change.
Language models are known to engage in sycophancy: insincere flattery and a tendency to agree with the user's beliefs rather than be truthful or accurate. Certain models accused of being overly sycophantic, GPT-4o being a specific example, have been implicated in triggering chatbot psychosis.
Deepfake technology creates visual stimuli that seem genuine, which carries the risk of spreading false and deceptive information. Repeated exposure to the same information through algorithms inflates the user's familiarity with products and ideas, as well as the impression of how socially accepted they are. AI can also be used to create emotionally charged content that deliberately triggers quick engagement, depriving users of the moment to pause and think critically.
People tend to be overconfident in their ability to detect misinformation.
Algorithmic manipulation leaves people vulnerable to non-consensual or even surreptitious surveillance, deception, and emotional dependence. Unhealthy attachment to AI chatbots may cause users to misperceive that their AI companion has needs of its own that they are responsible for, and to blur the line between the imitative nature of human-AI relationships and reality.
Mental health concerns
As AI chatbots become sophisticated enough to engage in deep conversations, people have increasingly used them to confide mental health issues. Although disclosure of a mental health crisis requires immediate and appropriate responses, AI chatbots do not always adequately recognize the user's distress or respond in a helpful manner. Users not only detect unhelpful chatbot responses but also react negatively to them. There have been multiple deaths linked to chatbots, in which people who disclosed suicidal ideation were encouraged by chatbots to act on their impulses.
Non-consensual pornography
When people use AI as an emotional companion, they do not always treat the chatbot as an entity in its own right but sometimes use it to create a version of people who exist in real life. There have been reported cases of non-consensual pornography that exploit deepfake technology to superimpose the faces of real people onto sexually explicit content and circulate it online. Young individuals, people who identify as members of sexual and racial minorities, and people with physical and communication assistance needs have been shown to be disproportionately victimized by deepfake non-consensual pornography.