
Quantifying User Psychology with Large Language Models

More and more of Receptiviti's customers are using Large Language Models (LLMs) and seeking to enhance these models' understanding of user psychology. When LLMs improve their grasp of user psychology, the quality of their responses increases, hallucinations decrease, and overall safety is enhanced. In this post, we will explain how LLM developers can measure user psychology by integrating Receptiviti into an API-based retrieval system.


Large Language Models fall short in comprehending the nuances of psychology

 

While Large Language Models have an impressive capacity for language generation and understanding, their comprehension of human psychology remains simplistic and far more limited than their language skills would lead you to believe. While LLMs can imitate empathy by responding to explicit emotional cues within a prompt, they can't pick up on important psychological and cognitive signals in human language when cues are less obvious.

 

Without genuine psychological understanding, LLM responses lack the depth and nuance of true psychological insight. This limitation hinders the effectiveness of LLMs in scenarios that require even moderately advanced psychological comprehension, such as sensitive customer support interactions, professional development and coaching, therapeutic interactions, or counselling. A lack of psychological insight can lead LLMs to overgeneralize, producing generic, boilerplate responses that are insensitive to psychological cues in the person's writing. At best, that leads to unengaged, canned-sounding responses. At worst, a user may receive an incorrect or biased response that causes them serious harm.

 

Retrieval Augmented Generation (RAG) has emerged as an approach for enhancing an LLM by integrating external knowledge sources that go beyond what is possible with its training data alone. RAG pairs a generative model capable of creating new content, like GPT, with a real-time retrieval model that selects and incorporates relevant information from a predefined set of documents or knowledge sources. The retrieved context guides the generation process, improving the relevance and coherence of the output. In short, RAG combines the strengths of generative and retrieval-based models in natural language processing tasks. If you'd like to delve into RAG in more detail, here's an excellent overview from Stack Overflow.

 

The Receptiviti-powered RAG process can unfold as follows: First, a query or context can be presented to the retrieval model, which in this case is Receptiviti’s language-based psychology insights API. The API scores the content on one or more of 200 psychological, cognitive, and emotional dimensions, which are pre-selected according to the specific use case. The resulting scores can be used to refine the initial query with additional psychological, cognitive, and emotional information about the user. These insights can then be fed into a generative model, which uses the psychological context to create highly relevant output that is tailored to the psychology of the user. This iterative interplay between retrieval and generation can enhance the quality and relevance of the final output, offering a more synergistic approach that combines the common knowledge and conversational skills of generative models with the grounded and validated psychological insights provided by the Receptiviti API.
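To make this concrete, here's a minimal sketch of that retrieve-augment-generate loop in Python. The endpoint URL, authentication scheme, payload, and response shape below are illustrative assumptions, not the documented interface; consult the Receptiviti API documentation for the actual details:

import requests

# Hypothetical endpoint and credentials for illustration; see the
# Receptiviti API docs for the actual URL, auth scheme, and payload.
RECEPTIVITI_URL = "https://api.receptiviti.com/v1/score"
API_KEY, API_SECRET = "your-api-key", "your-api-secret"

def retrieve_psychology_scores(text, dimensions):
    """Retrieval step: score the user's prompt on pre-selected
    psychological dimensions via the Receptiviti API."""
    response = requests.post(
        RECEPTIVITI_URL,
        auth=(API_KEY, API_SECRET),  # assumed key/secret basic auth
        json={"content": text},      # assumed request payload shape
        timeout=10,
    )
    response.raise_for_status()
    scores = response.json()         # assumed to map dimension -> 0-100 score
    return {dim: scores[dim] for dim in dimensions}

def augment_prompt(user_prompt, scores):
    """Augmentation step: fold the retrieved psychological context into
    the prompt that will be sent to the generative model."""
    context = ", ".join(f"{dim}={value:.1f}" for dim, value in scores.items())
    return (
        f"Psychological context for this user (0-100 scores): {context}.\n"
        "Adapt the tone and depth of your answer to this profile.\n\n"
        f"User: {user_prompt}"
    )

user_prompt = "I feel overwhelmed by work, and nothing seems to be going right."
scores = retrieve_psychology_scores(user_prompt, ["analytical", "anxiety", "empathy"])
augmented_prompt = augment_prompt(user_prompt, scores)
# Generation step: pass augmented_prompt to any LLM completion endpoint.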

 

LLMs convincingly imitate empathy when prompts contain overt emotion

Generating a contextually relevant and emotionally appropriate response depends on the LLM's ability not only to comprehend the literal meaning of what it's being asked, but also to understand the contextual nuances embedded in the language. In emotionally charged scenarios, understanding how a question is asked becomes critical. The LLM must recognize the user's emotion and mimic it, crafting a response that is not only factually accurate but also strikes an appropriate emotional tone. Consider a user expressing frustration in an emotionally charged scenario:

 

User Input: "I feel overwhelmed by work, and nothing seems to be going right. It's just one of those days."

 

In this example, the non-Receptiviti-powered LLM recognizes the overt emotional tone and distressing situation apparent from the user's statement, acknowledges the feelings of overwhelm, and responds with empathy and support. The language used reflects an understanding of the overt emotion expressed within the prompt, going beyond fact-based information to address the user's emotional state through the emotional language contained in the prompt:

 

LLM Response: "I'm truly sorry to hear that you're feeling overwhelmed. It's completely normal to have challenging days. Remember, it's okay to take a step back, maybe take a short break to recharge. You're not alone in feeling this way. If there's anything specific on your mind or if you just need someone to talk to, I'm here for support."

 

Mimicry is not psychological understanding

 

The inherent limitations of LLMs become apparent in scenarios where subtle emotional or psychological cues require nuanced interpretation. In applications that require basic responses to prompts containing overt emotions, superficially empathetic LLM responses may be sufficient. However, when emotional signals are subtle, LLMs may struggle to make the right inferences, even with plenty of relevant information in their training data. For instance, despite the abundance of literature on narcissism included in the training data of most LLMs, these systems' knowledge of how narcissism manifests in language is often based on folk psychology or on commonly held but incorrect beliefs and associations in the training set.


In test results, GPT typically equates narcissism with the use of high rates of first-person singular pronouns. However, most psychological research indicates that narcissists are no more likely to use first-person singular pronouns than people who are not narcissists. In fact, research shows that narcissists typically use higher rates of second-person pronouns and swear words, and use less language associated with anxiety, fear, tentativeness, and sensory processes. High rates of first-person singular pronouns, on the other hand, are positively correlated with very different psychological phenomena like depressive symptoms and neuroticism. Such gaps in understanding often lead LLMs to hallucinate, which can negatively impact the user experience or lead to undesirable or potentially dangerous results.

 

Infusing psychological understanding into Large Language Models with Receptiviti-powered Retrieval Augmented Generation

 

Clearly, human psychology is shaped by factors that are challenging to encapsulate in an LLM training dataset. A more sophisticated and accurate comprehension of psychological phenomena requires the LLM to decipher the implicit cues that signal the psychological and cognitive processes embedded in prompt language, rather than mimicking obvious ones. Retrieval Augmented Generation (RAG) can help by augmenting an LLM with external knowledge sources, giving the model capabilities that exceed its initial training data.

 

A RAG that utilizes Receptiviti’s language-based psychology insights API can further enhance an LLM’s ability to infer a wide variety of psychological insights about a user from implicit cues contained within their prompt language, and in doing so, enable the LLM to generate responses that consider a user’s psychology and cognitive processes.

 

Assessing an individual’s thinking style from their Large Language Model prompts

 

For example, a Receptiviti API-powered RAG can offer an LLM an enhanced understanding of how users process information and solve problems, thereby helping LLMs adapt their responses to align with users' mental frameworks and problem-solving strategies. Consider how an LLM's response to the question "What is an LLM?" can be tailored based on a Receptiviti-enabled RAG's quantification of how analytical the user's thinking style is:


Receptiviti-powered RAG automatically tailors LLM responses based on how analytical the user is.

Users whose language the RAG deems to be low on the Receptiviti "analytical" dimension are likely to benefit from responses that are concise and engaging and that use illustrative examples to aid comprehension:

 

Receptiviti-enabled RAG response tailored to a less analytical user (Analytical score of 38.9): “An LLM is short for Large Language Model. It's like a smart computer program that understands and generates text. You throw words at it, and it gives you back more words that make sense. People use it for things like making chatbots or writing articles without having a person do the writing. It's basically a program that talks and writes like a person.”

 

Users whose language the RAG deems to be high on the Receptiviti "analytical" dimension receive detailed responses supported by more facts and data:

 

Receptiviti-enabled RAG response tailored to a highly analytical user (Analytical score of 92.1): “An LLM, or Large Language Model, is a sophisticated artificial intelligence system designed for natural language processing tasks. These models, exemplified by architectures like GPT-4, are built on deep learning techniques and trained on extensive datasets containing diverse language patterns. LLMs excel in understanding context, semantics, and linguistic nuances. They consist of multiple layers of neural networks, allowing them to capture intricate language structures and generate human-like text. LLMs find applications in various fields, such as language translation, text completion, question answering, and content creation, showcasing their ability to comprehend and generate coherent and contextually relevant responses.”
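In practice, the routing logic can be as simple as a threshold on the retrieved score. The sketch below assumes the 0-100 "analytical" score used in these examples; the 50-point cutoff is an illustrative choice, not a Receptiviti recommendation:

def system_prompt_for(analytical_score):
    """Translate the retrieved 'analytical' score (assumed 0-100 scale)
    into generation instructions for the LLM."""
    if analytical_score < 50:  # illustrative cutoff; tune per use case
        return ("Answer concisely in plain, conversational language and "
                "include a relatable example or analogy to aid comprehension.")
    return ("Answer in depth, using precise terminology, supporting facts, "
            "and a structured explanation.")

print(system_prompt_for(38.9))  # yields the plain, example-driven style above
print(system_prompt_for(92.1))  # yields the detailed, fact-heavy style above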

 

Assessing personality from users' Large Language Model prompts

 

A Receptiviti API-powered RAG could also be used to assess a user's personality based on the language contained in their prompts. These quantitative scores can serve a wide variety of purposes, including informing market research with personality insights, developing customer personas, or building a comprehensive and dynamic understanding of the user so that future responses are tailored to the nuances of their unique personality:


Receptiviti-powered RAG builds user personality profiles by analyzing their LLM prompts.
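As a sketch of how such a profile might accumulate, the snippet below averages scores across a user's prompt history, reusing the retrieve_psychology_scores helper from the earlier sketch. The Big Five dimension names are illustrative placeholders rather than the API's actual labels:

from collections import defaultdict

# Illustrative dimension names; the API's actual Big Five labels may differ.
BIG_FIVE = ["openness", "conscientiousness", "extraversion",
            "agreeableness", "neuroticism"]

def build_profile(prompt_history):
    """Average per-prompt scores into a running personality profile.
    Reuses retrieve_psychology_scores from the earlier RAG sketch."""
    totals = defaultdict(float)
    for prompt in prompt_history:
        for dim, value in retrieve_psychology_scores(prompt, BIG_FIVE).items():
            totals[dim] += value
    return {dim: total / len(prompt_history) for dim, total in totals.items()}

profile = build_profile([
    "I feel overwhelmed by work, and nothing seems to be going right.",
    "Can you help me plan my week so I stop missing deadlines?",
])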

 

For LLM-based digital health or coaching platforms, a similar approach could be used to track how changes in a user's personality manifest over time. Big Five personality facets such as Stress-Prone or Anxiety-Prone can quantify how stressed or anxious an individual is at a point in time, or track changes in those levels over time. The resulting scores can be used to better understand how the individual responds to therapeutic interventions and to create far more empathetic and contextually suitable responses.
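As an illustration of tracking change over time, the sketch below compares facet scores across sessions. The facet names and values here are hypothetical placeholders, not real Receptiviti output:

from datetime import date

# Hypothetical per-session facet scores (0-100); real values would come
# from scoring each session's language with the Receptiviti API.
sessions = [
    (date(2024, 1, 8), {"stress_prone": 74.2, "anxiety_prone": 68.5}),
    (date(2024, 2, 5), {"stress_prone": 61.0, "anxiety_prone": 55.3}),
    (date(2024, 3, 4), {"stress_prone": 48.7, "anxiety_prone": 41.9}),
]

def facet_change(facet):
    """Change between the first and most recent session; a negative value
    suggests stress or anxiety is trending down under the intervention."""
    return sessions[-1][1][facet] - sessions[0][1][facet]

for facet in ("stress_prone", "anxiety_prone"):
    print(f"{facet}: {facet_change(facet):+.1f} since baseline")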

 

Comprehensive understanding of user psychology, personality, and emotions with Receptiviti-powered Retrieval Augmented Generation

 

The degree to which a user is stressed, anxious, or analytical in their thinking style is just one of the hundreds of psychological phenomena that can be inferred with a Receptiviti API-powered RAG. Prompt language can be analyzed to understand a user's emotions, anxiety levels, empathy, and leadership style, and to conduct a comprehensive assessment of their personality using frameworks like the Big Five and DISC.

 

A Receptiviti API-powered RAG generates quantitative scores that can be used to compare individuals or to track how an individual changes over time.

 

LLMs are designed for natural language understanding and generation, and they excel at processing and producing human-like text based on the input they receive. Generating quantitative insights into phenomena like psychology, personality, and emotion, however, requires integrating external knowledge that has been specifically designed for the complexities of language-based psychological analysis, like Receptiviti.

 

Augmenting Large Language Models (LLMs) with a comprehensive understanding of human psychology brings several advantages:

  • It creates interactions that are more meaningful and impactful, finely tuned to the unique psychological makeup of each user.

  • It reduces the likelihood of LLMs generating responses that might be biased or lack sensitivity.

  • It equips LLMs with the capability to track and understand a user’s psychological changes over time.

From a user experience perspective, an empirical understanding of users' cognitive and psychological processes makes interactions with LLMs more intuitive, adaptive, and far more impactful.

 

Contact us to learn more or to get started with an API account.
