4 principles for using AI to spot abuse—without making it worse

By gossipstoday | June 18, 2025 | 6 min read

Artificial intelligence is rapidly being adopted to help prevent abuse and protect vulnerable people—including children in foster care, adults in nursing homes, and students in schools. These tools promise to detect danger in real time and alert authorities before serious harm occurs.

Developers are using natural language processing, for example—a form of AI that interprets written or spoken language—to try to detect patterns of threats, manipulation, and control in text messages. This information could help detect domestic abuse and potentially assist courts or law enforcement in early intervention. Some child welfare agencies use predictive modeling, another common AI technique, to calculate which families or individuals are most “at risk” for abuse.
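
To make the approach concrete, here is a minimal sketch of the kind of text classifier such tools build on, using scikit-learn with a tiny, invented set of labeled messages. It illustrates the technique only; it is not any agency's actual model.

```python
# Minimal sketch of a text classifier that flags "concerning" messages.
# The labeled examples below are invented for illustration; real systems
# train on far larger (and far messier) data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "If you leave, you'll regret it",   # labeled threatening
    "You're not allowed to see them",   # labeled controlling
    "Want to grab dinner tonight?",     # labeled benign
    "Running late, see you at 7",       # labeled benign
]
labels = [1, 1, 0, 0]  # 1 = flagged, 0 = not flagged

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

# Probability that a new message would be flagged
print(model.predict_proba(["Don't you dare talk to him again"])[0][1])
```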

When thoughtfully implemented, AI tools have the potential to enhance safety and efficiency. For instance, predictive models have helped social workers prioritize high-risk cases and intervene earlier.

But as a social worker with 15 years of experience researching family violence—and five years on the front lines as a foster-care case manager, child abuse investigator, and early childhood coordinator—I’ve seen how well-intentioned systems often fail the very people they are meant to protect.

Now, I am helping to develop iCare, an AI-powered surveillance camera that analyzes limb movements—not faces or voices—to detect physical violence. I’m grappling with a critical question: Can AI truly help safeguard vulnerable people, or is it just automating the same systems that have long caused them harm?

New tech, old injustice

Many AI tools are trained to “learn” by analyzing historical data. But history is full of inequality, bias, and flawed assumptions. So are the people who design, test, and fund AI.

That means AI algorithms can wind up replicating systemic forms of discrimination, like racism or classism. A 2022 study in Allegheny County, Pennsylvania, found that a predictive risk model to score families’ risk levels—scores given to hotline staff to help them screen calls—would have flagged Black children for investigation 20% more often than white children, if used without human oversight. When social workers were included in decision-making, that disparity dropped to 9%.

Language-based AI can also reinforce bias. For instance, one study showed that natural language processing systems misclassified African American Vernacular English as “aggressive” at a significantly higher rate than Standard American English—up to 62% more often, in certain contexts.

Meanwhile, a 2023 study found that AI models often struggle with context clues, meaning sarcastic or joking messages can be misclassified as serious threats or signs of distress.

These flaws can replicate larger problems in protective systems. People of color have long been over-surveilled in child welfare systems—sometimes due to cultural misunderstandings, sometimes due to prejudice. Studies have shown that Black and Indigenous families face disproportionately higher rates of reporting, investigation, and family separation compared with white families, even after accounting for income and other socioeconomic factors.

Many of these disparities stem from structural racism embedded in decades of discriminatory policy decisions, as well as implicit biases and discretionary decision-making by overburdened caseworkers.

Surveillance over support

Even when AI systems do reduce harm toward vulnerable groups, they often do so at a disturbing cost.

In hospitals and eldercare facilities, for example, AI-enabled cameras have been used to detect physical aggression between staff, visitors, and residents. While commercial vendors promote these tools as safety innovations, their use raises serious ethical concerns about the balance between protection and privacy.

In a 2022 pilot program in Australia, AI camera systems deployed in two care homes generated more than 12,000 false alerts over 12 months—overwhelming staff and missing at least one real incident. The program’s accuracy did “not achieve a level that would be considered acceptable to staff and management,” according to the independent report.

Children are affected, too. In U.S. schools, AI surveillance tools such as Gaggle, GoGuardian, and Securly are marketed as ways to keep students safe. Such programs can be installed on students’ devices to monitor online activity and flag anything concerning.

But they’ve also been shown to flag harmless behaviors—like writing short stories with mild violence, or researching topics related to mental health. As an Associated Press investigation revealed, these systems have also outed LGBTQ+ students to parents or school administrators by monitoring searches or conversations about gender and sexuality.

Other systems use classroom cameras and microphones to detect “aggression.” But they frequently misidentify normal behavior like laughing, coughing, or roughhousing—sometimes prompting intervention or discipline.

These are not isolated technical glitches; they reflect deep flaws in how AI is trained and deployed. AI systems learn from past data that has been selected and labeled by humans—data that often reflects social inequalities and biases. As sociologist Virginia Eubanks wrote in Automating Inequality, AI systems risk scaling up these long-standing harms.

Care, not punishment

I believe AI can still be a force for good, but only if its developers prioritize the dignity of the people these tools are meant to protect. I’ve developed a framework of four key principles for what I call “trauma-responsive AI.”

Survivor control: People should have a say in how, when, and if they’re monitored. Providing users with greater control over their data can enhance trust in AI systems and increase their engagement with support services, such as creating personalized plans to stay safe or access help.

Human oversight: Studies show that combining social workers’ expertise with AI support improves fairness and reduces child maltreatment—as in Allegheny County, where caseworkers used algorithmic risk scores as one factor, alongside their professional judgment, to decide which child abuse reports to investigate.

Bias auditing: Governments and developers are increasingly encouraged to test AI systems for racial and economic bias. Open-source tools like IBM’s AI Fairness 360, Google’s What-If Tool, and Fairlearn assist in detecting and reducing such biases in machine learning models.
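
As a rough illustration of what such an audit looks like in practice, here is a minimal sketch using Fairlearn’s MetricFrame to compare how often a model flags cases across two groups. The data is synthetic and the group labels are assumptions for the example.

```python
# Minimal bias-audit sketch with Fairlearn: compare the model's flag rate
# across demographic groups. All data here is synthetic.
import numpy as np
from fairlearn.metrics import MetricFrame, selection_rate

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                  # actual outcomes (synthetic)
y_pred = rng.integers(0, 2, size=1000)                  # model's flags (synthetic)
group = rng.choice(["group_a", "group_b"], size=1000)   # protected attribute

audit = MetricFrame(
    metrics={"selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)
print(audit.by_group)       # flag rate per group
print(audit.difference())   # largest gap between groups
```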

Privacy by design: Technology should be built to protect people’s dignity. Open-source tools like Amnesia, Google’s differential privacy library, and Microsoft’s SmartNoise help anonymize sensitive data by removing or obscuring identifiable information. Additionally, AI-powered techniques, such as facial blurring, can anonymize people’s identities in video or photo data.
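
For intuition, here is a plain-NumPy sketch of the Laplace mechanism, the core idea behind such differential-privacy libraries: publish an aggregate statistic with calibrated noise instead of raw counts. It is not the API of any of the tools named above, and the numbers are made up.

```python
# Laplace mechanism in miniature: add noise calibrated to a count's
# sensitivity (1) and the privacy budget epsilon. For intuition only;
# real libraries handle clamping, budgets, and composition far more carefully.
import numpy as np

def noisy_count(true_count: int, epsilon: float = 1.0) -> float:
    """Release a count with Laplace noise of scale 1/epsilon."""
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# e.g., publish how many residents triggered an alert this month,
# without revealing whether any particular individual did.
print(noisy_count(true_count=42, epsilon=0.5))
```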

Honoring these principles means building systems that respond with care, not punishment.

Some promising models are already emerging. The Coalition Against Stalkerware and its partners advocate for including survivors in all stages of tech development—from needs assessments to user testing and ethical oversight.

Legislation is important, too. On May 5, 2025, for example, Montana’s governor signed a law restricting state and local government from using AI to make automated decisions about individuals without meaningful human oversight. It requires transparency about how AI is used in government systems and prohibits discriminatory profiling.

As I tell my students, innovative interventions should disrupt cycles of harm, not perpetuate them. AI will never replace the human capacity for context and compassion. But with the right values at the center, it might help us deliver more of it.

Aislinn Conrad is an associate professor of social work at the University of Iowa.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
