There Are Benefits to an Artificial-Authentic Human Partnership
The aftermath of every mass shooting involves retrospective analysis of red flags. What was missed? How could the violence have been averted? What should we have seen? As this writer discusses in a prior article, artificial intelligence (AI) can be used to help threat assessors detect dangerous persons.
But how good is it, and do the potential risks outweigh the benefits?
Multitasking: AI Versus Humans
AI can monitor public areas for threats more consistently than humans, who struggle to multitask over extended periods, are vulnerable to cognitive fatigue, and cannot physically watch multiple video screens at once.
Staffing shortages exacerbate these human limitations.
True, using AI to monitor for potential threats raises issues of privacy and bias, but relying on human threat assessors to screen for dangerous people involves some of the same concerns.
Even something as basic as facial recognition, already in widespread use, can produce biased assessments whether performed by a person or a computer program, or both, considering that humans write the programs.
Another issue involves the extent to which AI can detect suspicious activity:
- How does it know what to look for?
- Would an activity be classified as suspicious merely because of the race, gender, or apparent religious affiliation of the actor?
Lacking human intuition and instinct, AI again depends on its programming to make these judgments.
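To make that dependence concrete, consider a minimal, purely hypothetical sketch of a rule-based "suspicion" score. Nothing here reflects any vendor's actual system; the feature names and weights are invented to show how human choices, including biased ones, get baked into the output.

```python
# A toy illustration (not any real product) of how a rule-based
# "suspicion" score depends entirely on the features and weights
# its programmers choose. All names and numbers are hypothetical.

from dataclasses import dataclass


@dataclass
class Observation:
    loitering_minutes: float   # time spent stationary in one area
    bag_size_liters: float     # size of any carried bag
    visits_past_week: int      # how often the person appeared recently


def suspicion_score(obs: Observation) -> float:
    """Weighted sum of behavioral features; higher means 'more suspicious'."""
    # The weights encode human judgment. If a chosen feature acts as a
    # proxy for race, gender, or religious affiliation (e.g., clothing
    # style), the score inherits that bias even though no protected
    # attribute appears anywhere in the code.
    return (
        0.05 * obs.loitering_minutes
        + 0.02 * obs.bag_size_liters
        + 0.10 * obs.visits_past_week
    )


if __name__ == "__main__":
    person = Observation(loitering_minutes=45, bag_size_liters=30,
                         visits_past_week=1)
    print(f"Suspicion score: {suspicion_score(person):.2f}")
```

The point of the sketch is simply that the machine's "judgment" is the programmers' judgment, formalized.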
In fact, some companies already use AI to detect weapons through "inference algorithms," raising concerns about racial profiling and the targeting of people legally permitted to carry guns.
And in general, weapon-detection systems may be less effective when scanning a crowd than when screening people one by one, as at a security checkpoint, not to mention the costs and other practical considerations of installing such technology.
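One intuition for why crowd scanning is harder: in a crowd, other bodies occlude the camera's view, so any single frame is less likely to reveal a concealed weapon. The back-of-the-envelope model below uses made-up numbers, not measured performance data, to illustrate that effect.

```python
# A rough, hypothetical model of checkpoint vs. crowd screening.
# The accuracy and visibility figures are invented for illustration.

def detection_probability(per_look_accuracy: float,
                          visibility: float,
                          looks: int) -> float:
    """Chance of at least one successful detection across several looks."""
    p_single = per_look_accuracy * visibility  # one look, partially occluded
    return 1 - (1 - p_single) ** looks


# Checkpoint: one unobstructed, close-range look at each person.
checkpoint = detection_probability(per_look_accuracy=0.90,
                                   visibility=1.0, looks=1)

# Crowd: several camera frames, but each view is mostly occluded.
crowd = detection_probability(per_look_accuracy=0.90,
                              visibility=0.25, looks=3)

print(f"Checkpoint: {checkpoint:.0%}")  # ~90%
print(f"Crowd:      {crowd:.0%}")       # ~53%
```

Even with multiple chances to spot the weapon, occlusion drags the crowd-scanning figure well below the checkpoint figure in this toy model.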
Reviewing information gathered by AI systems may also be useful in identifying suspects and motives for violence.
Some research has found that after mass shootings, participants sought affection through chatbots to help them cope with stress and negative emotions.
But because the goal is prevention, the question remains: how can we use AI to avert disaster in the first place?
The Value of Human Observation and Intervention
Preventing a mass shooting requires more than computerized analysis; it involves the observations of the people who are in the best position to notice red flags in terms of negative affect, expressed grievances, and behavioral changes.
In this sense, averting a mass shooting requires knowledge and experience that AI doesn't have: close personal acquaintance with an individual in crisis.
Colleagues and coworkers are in a good position to notice concerning behavior, and close friends and family members can compare it to a baseline — which may include how the suspect has dealt with stress, trauma, anger, or grievances in the past.
Taken together, early observations can avert disaster through effective intervention. The key is prompting those around the suspect to speak up.
As we continue to examine the interplay between artificial intelligence and human judgment, wisdom, and knowledge, we continue to brainstorm, in every sense of the word, ways to work together to prevent violence before it occurs.
The preceding article was originally published in Psychology Today, and is used with the permission of its author.
Wendy L. Patrick, JD, MDiv, Ph.D., is an award-winning career trial attorney and media commentator. She is host of "Live with Dr. Wendy" on KCBQ and a daily guest on other media outlets, delivering a lively mix of flash, substance, and style.
© 2024 Newsmax. All rights reserved.