

Deception Detection of Text, Language and Word Choice in Texting.


The article below discusses some of the methods currently used to detect deception in text and describes a new product for text analysis. Communication experts call it content analysis or rhetorical analysis; psychologists, law enforcement and the criminal justice system call it statement analysis or forensic statement analysis.
Patti Wood, MA, Certified Speaking Professional - The Body Language Expert. For more body language insights go to her website at http://PattiWood.net. Also check out the body language quiz on her YouTube Channel at http://youtube.com/user/bodylanguageexpert.
www.freshpatents.com/Method-and-system-for-the-automatic-recognition-of-deceptive-language-dt20070111ptan20070010993.php

Method and system for the automatic recognition of deceptive language

Abstract: A system for identifying deception within a text includes a processor for receiving and processing a text file. The processor includes a deception indicator tag analyzer for inserting into the text file at least one deception indicator tag that identifies a potentially deceptive word or phrase within the text file, and an interpreter for interpreting the at least one deception indicator tag to determine a distribution of potentially deceptive words or phrases within the text file and generating deception likelihood data based upon the density or distribution of potentially deceptive words or phrases within the text file. A method for identifying deception within a text includes the steps of receiving a first text to be analyzed, normalizing the first text to produce a normalized text, inserting into the normalized text at least one part-of-speech tag that identifies a part of speech of a word associated with the part-of-speech tag, inserting into the normalized text at least one syntactic label that identifies a linguistic construction of one or more words associated with the syntactic label, inserting into the normalized text at least one deception indicator tag that identifies a potentially deceptive word or phrase within the normalized text, interpreting the at least one deception indicator tag to determine a distribution of potentially deceptive words or phrases within the normalized text, and generating deception likelihood data based upon the density or frequency of distribution of potentially deceptive words or phrases within the normalized text. (end of abstract)
CROSS-REFERENCE TO RELATED APPLICATION

[0001] This application claims priority from a U.S. provisional patent application Ser. No. 60/635,306, filed on Dec. 10, 2004, which is herein incorporated by reference in its entirety.

BACKGROUND OF THE INVENTION

[0002] This invention relates to the application of Natural Language Processing (NLP) to the detection of deception in written texts.

[0003] The critical assumption of all deception detection methods is that people who deceive undergo measurable changes--either physiological or behavioral. Language-based deception detection methods focus on behavioral factors. They have typically been investigated by research psychologists and law enforcement professionals working in an area described as "statement analysis" or "forensic statement analysis". The development of statement analysis techniques has taken place with little or no input from established language and speech technology communities.

[0004] The goal of these efforts has been twofold. Research projects, primarily conducted by experimental psychologists and management information systems groups, investigate the performance of human subjects in detecting deception in spoken and written accounts of a made up incident. Commercial and government (law enforcement) efforts are aimed at providing a technique that can be used to evaluate written and spoken statements by people suspected of involvement in a crime. In both cases, investigators look at a mix of factors, e.g. factual content, emotional state of the subject, pronoun use, extent of descriptive detail, coherence. Only some of these are linguistic. To date, the linguistic analysis of these approaches depends on overly simple language description and lacks sufficient formal detail to be automated--application of the proposed techniques depends largely on human judgment as to whether a particular linguistic feature is present or not. Moreover none of the proposed approaches bases its claims on examination of large text or speech corpora.

[0005] Two tests for measuring physiological changes are commercially available--polygraphs and computer voice stress analysis. Polygraph technology is the best established and most widely used. In most cases, the polygraph is used to measure hand sweating, blood pressure and respiratory rate in response to Yes/No questions posed by a polygraph expert. The technology is not appropriate for freely generated speech. Fluctuations in response are associated with emotional discomfort that may be caused by telling a lie. Polygraph testing is widely used in national security and law enforcement agencies but barred from many applications in the United States, including court evidence and pre-employment screening. Computer voice stress analysis (CVSA) measures fundamental frequency (F0) and amplitude values. It does not rely on Yes/No questions but can be used for the analysis of any utterance. The technology has been commercialized and several PC-based products are available. Two of the better known CVSA devices are the Diogenes Group's "Lantern" system and the Trustech "Vericator". CVSA devices have been adopted by some law enforcement agencies in an effort to use a technology that is less costly than polygraphs as well as having fewer detractors. Nonetheless, these devices do not seem to perform as well as polygraphs. The article Investigation and Evaluation of Voice Stress Analysis Technology (D. Haddad, S. Walter, R. Ratley and M. Smith, National Institute of Justice Final Report, Doc. #193832 (2002)) provides an evaluation of the two CVSA systems described above. The study cautions that even a slight degradation in recording quality can affect performance adversely. The experimental evidence presented indicates that the two CVSA products can successfully detect and measure stress but it is unclear as to whether the stress is related to deception. Hence their reliability for deception detection is still unproven.
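[Editor's note: the CVSA products mentioned above track fundamental frequency (F0) and amplitude. As a rough illustration of what F0 estimation involves, here is a minimal autocorrelation-based sketch in Python. It is a generic signal-processing technique offered only for context; it is not the algorithm used by the Lantern, the Vericator, or any other commercial product.]

    # Minimal sketch of fundamental-frequency (F0) estimation by autocorrelation.
    # This is a generic illustration, not any commercial voice stress analysis method.
    import numpy as np

    def estimate_f0(frame, sample_rate, f0_min=75.0, f0_max=400.0):
        """Estimate F0 (Hz) of one speech frame by picking the autocorrelation peak."""
        frame = frame - np.mean(frame)                   # remove DC offset
        corr = np.correlate(frame, frame, mode="full")   # full autocorrelation
        corr = corr[len(corr) // 2:]                     # keep non-negative lags
        lag_min = int(sample_rate / f0_max)              # shortest plausible pitch period
        lag_max = min(int(sample_rate / f0_min), len(corr) - 1)
        best_lag = lag_min + np.argmax(corr[lag_min:lag_max + 1])
        return sample_rate / best_lag

    # Example: a synthetic 120 Hz tone should yield an estimate near 120 Hz.
    sr = 16000
    t = np.arange(0, 0.04, 1.0 / sr)
    print(round(estimate_f0(np.sin(2 * np.pi * 120 * t), sr), 1))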

[0006] Current commercial systems for detection of deceptive language require an individual to undergo extensive specialized training. They require special audio equipment and their application is labor-intensive. Automated systems that can identify and interpret deception cues are not commercially available.

BRIEF SUMMARY OF THE INVENTION

[0007] Motivated by the need for a testable and reliable method of identifying deceptive language, the present method detects deception by computer analysis of freely generated text. The method accepts transcribed or written statements and produces an analysis in which portions of the text are marked as highly likely to be deceptive or highly likely to be truthful. It provides for an automated system that can be used without special training or knowledge of linguistics.

[0008] A system for identifying deception within a text according to the present invention includes a processor for receiving and processing a text file, wherein the processor has a deception indicator tag analyzer for inserting into the text file deception indicator tags that identify potentially deceptive words and/or phrases within the text file. The processor also includes an interpreter for interpreting the deception indicator tags to determine a distribution of potentially deceptive words or phrases within the text file. The interpreter also generates deception likelihood data based upon the distribution of potentially deceptive words or phrases within the text file. The system may further include a display for displaying the deception likelihood data. The processor may further include a receiver for receiving a first text to be analyzed, a component for normalizing the first text to produce a normalized text, a component for inserting into the normalized text part-of-speech tags that identify the parts of speech of the words associated with the part-of-speech tags, and a component for inserting into the normalized text syntactic labels that identify linguistic constructions of one or more words associated with each syntactic label. The normalized text, including the part-of-speech tag(s) and the syntactic label(s), is provided to the deception indicator tag analyzer.

[0009] In one embodiment of the system according to the present invention, the deception indicator tag analyzer inserts the deception indicator tag into the normalized text based upon words or phrases in the normalized text, part-of-speech tags inserted into the normalized text, and syntactic labels inserted in the normalized text. The deception indicator tags may be associated with a defined word or phrase or associated with a defined word or phrase when used in a defined linguistic context. Also, the interpreter may calculate a proximity metric for each word or phrase in the text file based upon the proximity of the word or phrase to a deception indicator tag such that the proximity metric is used to generate the deception likelihood data. The interpreter may also calculate a moving average metric for each word or phrase in the text file based upon the proximity metric of the word or phrase such that the moving average metric is used to generate the deception likelihood data. The calculation of the moving average metric for each word or phrase in the text file may be adjusted by a user of the system to alter the deception likelihood data as desired by the user.
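[Editor's note: as a rough illustration of the proximity and moving-average metrics described in the paragraph above, here is a minimal Python sketch. The decay formula, window width and example tag positions are assumptions made for illustration; the patent does not disclose the actual scoring here.]

    def proximity_scores(tokens, tagged_indices, decay=0.5):
        # Score each token by its distance to the nearest deception indicator tag;
        # the 1 / (1 + decay * distance) form is an illustrative assumption.
        scores = []
        for i in range(len(tokens)):
            if not tagged_indices:
                scores.append(0.0)
            else:
                distance = min(abs(i - j) for j in tagged_indices)
                scores.append(1.0 / (1.0 + decay * distance))
        return scores

    def moving_average(scores, window=5):
        # Smooth the proximity scores; the patent says the averaging can be
        # adjusted by the user, so the window width here is arbitrary.
        smoothed = []
        half = window // 2
        for i in range(len(scores)):
            span = scores[max(0, i - half):i + half + 1]
            smoothed.append(sum(span) / len(span))
        return smoothed

    tokens = "I never saw the money and I think I left around noon".split()
    tagged = [1, 7]  # suppose earlier stages flagged "never" and the hedge "think"
    print(moving_average(proximity_scores(tokens, tagged)))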

[0010] A method for identifying deception within a text in accordance with the present invention includes the steps of: receiving a first text to be analyzed; normalizing the first text to produce a normalized text; inserting into the normalized text at least one part-of-speech tag that identifies the part of speech of the word associated with each part-of-speech tag; inserting into the normalized text at least one syntactic label that identifies a linguistic construction of one or more words associated with the syntactic label; inserting into the normalized text at least one deception indicator tag that identifies a potentially deceptive word or phrase within the normalized text; interpreting the at least one deception indicator tag to determine a distribution of potentially deceptive words or phrases within the normalized text; and generating deception likelihood data based upon the distribution of potentially deceptive words or phrases within the normalized text.
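[Editor's note: a minimal end-to-end sketch of such a pipeline in Python is shown below: normalize the text, tag cue words, and report their density as crude likelihood data. The cue lexicons and the density threshold are invented for illustration; they are not the word lists or thresholds of the patented method, which this excerpt does not disclose.]

    # Illustrative sketch of a tag-and-score pipeline; the cue lists and threshold
    # below are assumptions, not the patent's actual lexicons or parameters.
    import re

    HEDGE_CUES = {"think", "believe", "maybe", "probably", "guess"}   # assumed cue list
    DISTANCING_CUES = {"never", "that", "someone", "somebody"}        # assumed cue list

    def normalize(text):
        """Lower-case the text and split it into word tokens."""
        return re.findall(r"[a-z']+", text.lower())

    def tag_deception_cues(tokens):
        """Attach a deception-indicator tag to tokens found in the cue lexicons."""
        tagged = []
        for tok in tokens:
            if tok in HEDGE_CUES:
                tagged.append((tok, "HEDGE"))
            elif tok in DISTANCING_CUES:
                tagged.append((tok, "DISTANCE"))
            else:
                tagged.append((tok, None))
        return tagged

    def likelihood(tagged, threshold=0.15):
        """Report cue density and a crude 'possibly deceptive' verdict."""
        flagged = sum(1 for _, tag in tagged if tag)
        density = flagged / max(1, len(tagged))
        return {"density": round(density, 3), "possibly_deceptive": density > threshold}

    statement = "I think someone must have moved the files, I never touched that drawer."
    print(likelihood(tag_deception_cues(normalize(statement))))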

[0011] While multiple embodiments are disclosed, still other embodiments of the present invention will become apparent to those skilled in the art from the following detailed description, which shows and describes illustrative embodiments of the invention. As will be realized, the invention is capable of modifications in various obvious aspects, all without departing from the spirit and scope of the present invention. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not restrictive.

BRIEF DESCRIPTION OF THE DRAWINGS

[0012] FIG. 1 is a schematic diagram of the components of a system for one embodiment of the invention.

[0013] FIG. 2 is a flowchart showing the overall processing of text in one embodiment of the invention.

[0014] FIG. 3 is a diagram showing how text is marked for display after analysis for deception.

[0015] FIG. 4 is a diagram showing an alternative for how text is marked for display after analysis for deception.

DETAILED DESCRIPTION

I. Overview

[0016] A core notion of the method is that deceptive statements incorporate linguistic attributes that are different from those of non-deceptive statements. It is possible to represent these attributes formally as a method of linguistic analysis that can be verified by empirical tests.

[0017] The method begins with certain widely accepted techniques of corpus linguistics and automated text analysis. The deception detection component is based on a corpus of "real world" texts, for example, statements and depositions from court proceedings and law enforcement sources which contain propositions that can be verified by external evidence. Linguistic analysis is accomplished by a combination of statistical methods and formal linguistic rules. A novel user interface interprets results of the analysis in a fashion that can be understood by a user with no specialized training.

[0018] A method in accordance with the present invention is implemented as an automated system that incorporates the linguistic analysis along with a method of interpreting the analysis for the benefit of a system user. A typical system user may be a lawyer, a law-enforcement professional, an intelligence analyst or any other person who wishes to determine whether a statement, deposition or document is deceptive. Unlike polygraph tests and similar devices that measure physiological responses to Yes/No questions, the method applies to freely generated text and does not require specialized or intrusive equipment. Thus it can be used in a variety of situations where statements of several sentences are produced.

[0019] The system builds on formal descriptions developed for linguistic theory and on techniques for automated text analysis developed by computational linguists. The analysis advances the state of the art in natural language processing, because deception detection is a novel application of NLP. In addition, the system compensates for the fact that humans recognize deceptive language at a rate little better than chance.


Smiling Makes You Feel Good!

Research on the positive effects of smiling.
http://www.newscientist.com/article/mg15020279.300-act-now-think-later--fear-not-politicians-that-elusive-feelgood-factor-can-be-created-in-an-instant-just-appeal-to-our-primal-instincts-advises-david-concar.html

Act now, think later - Fear not, politicians. That elusive feel-good factor can be created in an instant. Just appeal to our primal instincts, advises David Concar
27 April 1996
Magazine issue 2027

Department stores opt for nice smells and muzak; impresarios use warm-up acts. But psychologist Sheila Murphy has an infinitely more devious way of getting people in the right frame of mind. First she sits them in front of a screen in her lab at the University of Southern California in Los Angeles. Then she flashes up images of smiling faces.

Nothing obviously devious about that: smiles make people cheerful. The rub is that Murphy's smiles last for just a few thousandths of a second. That's way too fast for the human brain to know what it's looking at. And yet, according to in-depth studies carried out over many years by Murphy, veteran emotions researcher Robert Zajonc and their colleagues, these split-second flashes of teeth and warmly wrinkled eyes induce a measurably more positive frame of mind.

It sounds crazy. How can people respond to facial expressions too short-lived to permeate...?


Patti Wood, MA, Certified Speaking Professional - The Body Language Expert. For more body language insights go to her website at http://PattiWood.net. Also check out the body language quiz on her YouTube Channel at http://youtube.com/user/bodylanguageexpert.

Revealing Body Language


Patti Wood, body language expert, tells US Weekly that Letterman appears to be hiding something. Check the link below to find out why Patti arrived at this conclusion!

http://www.scribd.com/doc/37126521/Letterman-US-Weekly


Patti Wood, MA, Certified Speaking Professional - The Body Language Expert. For more body language insights go to her website at http://pattiwood.net/. Also check out the body language quiz on her YouTube Channel at http://youtube.com/user/bodylanguageexpert.



Body Language Tips for Job Interviews, Interviewing Tips
Interviewers go with their gut.

http://www.newscientist.com/article/mg20427312.500-cometowork-eyes-secrets-of-interview-success.html

Come-to-work eyes: Secrets of interview success


ONE thing you can be sure of when you walk into an interview is that you're not there to be tested on what you know. The people sitting in front of you are already aware that when it comes to technical skills and qualifications, you tick all the boxes. What they're dying to find out is what you're like as a person - whether you'll fit in, whether they can trust you, how you're likely to behave at the office party. From now on, it's all about chemistry - or, more accurately, psychology.

So how do you give yourself the best chance of success? The most common piece of advice you'll get is to "be yourself". Forget that, it'll only help if you're the chief executive's cousin. A better strategy is to exploit the psychological shortcuts that interviewers unconsciously use when deciding whether or not they like someone - cues such as eye contact and body language. We all use them when meeting someone for the first time, and research shows that interviewers rely on these more than rational analysis when assessing a candidate.

We're not advocating wholesale deception, just a bit of fine-tuning to help pitch things in your favour...

First impressions count

When we meet someone for the first time, we make our minds up about various aspects of their personality almost instantaneously. We can't help ourselves. Janine Willis and Alexander Todorov at Princeton University found that showing people an unfamiliar face for just one-tenth of a second is long enough for them to form judgements about the person's attractiveness, likeability, trustworthiness, competence and aggressiveness. Having more time to deliberate doesn't change our opinions, it only increases our confidence in them (Psychological Science, vol 17, p 592).

No doubt there are good evolutionary reasons for this, though it's not clear how accurate such snap judgements are. Unfortunately, your interviewer is as likely to jump to quick conclusions as the rest of us. So although it may seem obvious, be sure to walk into that room looking upbeat and friendly.

And it's best to keep it up, at least for half a minute. Tricia Prickett, while at the University of Toledo in Ohio, found that untrained observers who watched a video of the first 20 to 30 seconds of a job interview were astonishingly accurate at predicting whether the applicant would be offered the job. That doesn't mean the observers were especially good at picking good candidates. It means the interviewers, despite being fully trained, still go with their initial gut instinct.

Can we change an interviewer's first impression? That's difficult, but not impossible, says Frank Bernieri, who studies personality perception at Oregon State University in Corvallis. Though it's easier to dislodge a positive impression than a negative one, he says. "Socially undesirable information, such as picking your nose or farting, tends to be weighted more in our assessments. What this means is that good impressions are always at risk of being trashed at any moment."

DO be prepared to turn on the charm right from the start

DON'T pick your nose. Bad first impressions are even harder to dislodge than bogies

Look fabulous

Attractive people make more money and go further in their careers because we are all biased towards beauty - unfortunate but true. This was shown by V. Bhaskar at University College London in a study of a Dutch TV show in which the highest-scoring player at the end of a round chooses which competitor to eliminate. He found that the least attractive players were twice as likely to be eliminated, despite scoring no worse than the others.


One reason for this is what's known as the halo effect: people assume that someone who scores highly in one character trait also scores highly in others. Social psychologist Richard Nisbett demonstrated that the thought process behind the halo effect is almost entirely subconscious (Journal of Personality and Social Psychology, vol 35, p 250). Use this to your advantage: most interviewers are mugs just like everyone else when it comes to the subtleties of social psychology.

DO make an effort: dress sharp and make sure you look your best

DON'T be tempted to test out the halo effect using your comic genius

Start with the handshake

Unless you plan on abseiling through the interviewer's window, shaking hands with them is probably the first opportunity you'll get to make an impression. Seize it. But not too hard. Give it a nice firm press, then some up and down movement.

That may sound disquietingly ritualistic, but several studies have found that people unconsciously equate a firm handshake with an extroverted, sociable personality - and that's more likely than a shy disposition to please an interviewer. What's more, a handshake can set the tone for the entire interview because it's one of the first nonverbal clues an applicant gives about their personality, says Greg Stewart at the University of Iowa in Iowa City, who last year tested the theory in mock interviews with 98 students. He found that those who had a firm handshake were more likely to be hired (Journal of Applied Psychology, vol 93, p 1139).




Patti Wood, MA, Certified Speaking Professional - The Body Language Expert. For more body language insights go to her website at http://pattiwood.net/. Also check out the body language quiz on her YouTube Channel at http://youtube.com/user/bodylanguageexpert.

Mind Reading Computers, Computers that Read Facial Expressions

I have blogged before about the research on computers that read facial expressions, paralanguage and gestures. Here is research I am following at Cambridge.

http://www.cl.cam.ac.uk/research/rainbow/emotions/mind-reading.html
Automatic inference of complex mental states

Promotional material for the silent screen star Florence Lawrence displaying a range of emotions
People express their mental states, including emotions, thoughts, and desires, all the time through facial expressions, vocal nuances and gestures. This is true even when they are interacting with machines. Our mental states shape the decisions that we make, govern how we communicate with others, and affect our performance. The ability to attribute mental states to others from their behaviour, and to use that knowledge to guide our own actions and predict those of others is known as theory of mind or mind-reading. It has recently gained attention with the growing number of people with Autism Spectrum Conditions, who have difficulties mind-reading.

Existing human-computer interfaces are mind-blind — oblivious to the user’s mental states and intentions. A computer may wait indefinitely for input from a user who is no longer there, or decide to do irrelevant tasks while a user is frantically working towards an imminent deadline. As a result, existing computer technologies often frustrate the user, have little persuasive power and cannot initiate interactions with the user. Even if they do take the initiative, like the now retired Microsoft Paperclip, they are often misguided and irrelevant, and simply frustrate the user. With the increasing complexity of computer technologies and the ubiquity of mobile and wearable devices, there is a need for machines that are aware of the user’s mental state and that adaptively respond to these mental states.

A computational model of mind-reading
Drawing inspiration from psychology, computer vision and machine learning, our team in the Computer Laboratory at the University of Cambridge has developed mind-reading machines — computers that implement a computational model of mind-reading to infer mental states of people from their facial signals. The goal is to enhance human-computer interaction through empathic responses, to improve the productivity of the user and to enable applications to initiate interactions with and on behalf of the user, without waiting for explicit input from that user. There are difficult challenges:

It involves uncertainty, since a person’s mental state can only be inferred indirectly by analyzing the behaviour of that person. Even people are not perfect at reading the minds of others.
Automatic analysis of the face from video is still an area of active research in its own right.
There is no ‘code-book’ to interpret facial expressions as corresponding mental states.

Processing stages in the mind-reading system
Using a digital video camera, the mind-reading computer system analyzes a person’s facial expressions in real time and infers that person’s underlying mental state, such as whether he or she is agreeing or disagreeing, interested or bored, thinking or confused. The system is informed by the latest developments in the theory of mind-reading by Professor Simon Baron-Cohen, who leads the Autism Research Centre at Cambridge.

Prior knowledge of how particular mental states are expressed in the face is combined with analysis of facial expressions and head gestures occurring in real time. The model represents these at different granularities, starting with face and head movements and building those in time and in space to form a clearer model of what mental state is being represented. Software from Nevenvision identifies 24 feature points on the face and tracks them in real time. Movement, shape and colour are then analyzed to identify gestures like a smile or eyebrows being raised. Combinations of these occurring over time indicate mental states. For example, a combination of a head nod, with a smile and eyebrows raised might mean interest. The relationship between observable head and facial displays and the corresponding hidden mental states over time is modelled using Dynamic Bayesian Networks.
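[Editor's note: to make the idea concrete, here is a toy recursive Bayesian filter in Python that updates a belief over a hidden mental state as gestures are observed, in the spirit of the Dynamic Bayesian Networks mentioned above. The states, gestures and all probabilities are invented for illustration; they are not the Cambridge system's actual model.]

    # Toy filter over a hidden mental state given observed head/face gestures.
    # All states, gestures and probabilities below are illustrative assumptions.
    import numpy as np

    states = ["interested", "bored"]
    gestures = ["nod", "smile", "raise_brows", "look_away"]

    transition = np.array([[0.8, 0.2],                    # P(next state | interested)
                           [0.3, 0.7]])                   # P(next state | bored)
    emission = np.array([[0.35, 0.30, 0.25, 0.10],        # P(gesture | interested)
                         [0.10, 0.10, 0.10, 0.70]])       # P(gesture | bored)

    def filter_states(observed, prior=np.array([0.5, 0.5])):
        """Update the belief over mental states after each observed gesture."""
        belief = prior.copy()
        for g in observed:
            belief = transition.T @ belief                      # predict the next state
            belief = belief * emission[:, gestures.index(g)]    # weight by the observation
            belief = belief / belief.sum()                      # renormalize
        return dict(zip(states, belief.round(3)))

    # A nod, then a smile, then raised brows should shift belief toward "interested".
    print(filter_states(["nod", "smile", "raise_brows"]))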

Results

Images from the Mind-reading DVD
The system was trained using 100 8-second video clips of actors expressing particular emotions from the Mind Reading DVD, an interactive computer-based guide to reading emotions. The resulting analysis is right 90% of the time on clips of actors and 65% of the time on video clips of non-actors. The system's performance was as good as the top 6% of people in a panel of 20 who were asked to label the same set of videos.

Previous computer programs have detected the six basic emotional states of happiness, sadness, anger, fear, surprise and disgust. This system recognizes complex states that are more useful because they come up more frequently in interactions. However, they are also harder to detect because they are conveyed in a sequence of movements rather than a single expression. Most other systems assume a direct mapping between facial expressions and emotion, but our system interprets facial and head gestures in the context of the person's most recent mental state, so the same facial expression may imply different mental states in different contexts.

Current projects and future work

Monitoring a car driver
The mind-reading computer system presents information about your mental state as easily as a keyboard and mouse present text and commands. Imagine a future where we are surrounded with mobile phones, cars and online services that can read our minds and react to our moods. How would that change our use of technology and our lives? We are working with a major car manufacturer to implement this system in cars to detect driver mental states such as drowsiness, distraction and anger.

Current projects in Cambridge are considering further inputs such as body posture and gestures to improve the inference. We can then use the same models to control the animation of cartoon avatars. We are also looking at the use of mind-reading to support on-line shopping and learning systems. The mind-reading computer system may also be used to monitor and suggest improvements in human-human interaction. The Affective Computing Group at the MIT Media Laboratory is developing an emotional-social intelligence prosthesis that explores new technologies to augment and improve people’s social interactions and communication skills.

We are also exploring the ethical implications and privacy issues raised by this research. Do we want machines to watch us and understand our emotions? Mind-reading machines will undoubtedly raise the complexity of human-computer interaction to include concepts such as exaggeration, disguise and deception that were previously limited to communications between people.

Further projects and links
Demonstrations of the system with volunteers at the CVPR Conference in 2004
Royal Society 2006 Summer Science Exhibition (including video)
Affective computing group at MIT
Autism Research Centre at the University of Cambridge
The mind-reading DVD


Patti Wood, MA, Certified Speaking Professional - The Body Language Expert. For more body language insights go to her website at http://PattiWood.net. Also check out the body language quiz on her YouTube Channel at http://youtube.com/user/bodylanguageexpert.