[Featured image: a futuristic AI robot at a digital crossroads in a smart city, choosing between two symbolic paths, one illuminated with green-blue light and icons of fairness and transparency, the other glowing red with symbols of bias, privacy breach, and opacity, while a human figure observes from the side.]

Agentic AI and Ethics – Navigating the Agentic AI Frontier 

Dr. Anton Ravindran

Author of “Will AI Dictate the Future?”

The contents presented here are based on information provided by the authors and are intended for general informational purposes only. AAIH does not guarantee the accuracy, completeness, or reliability of the information. Views and opinions expressed are those of the authors and do not necessarily reflect our position or opinions. AAIH assumes no responsibility or liability for any errors or omissions in the content.

Today, I want to take us on a journey into the world of agentic AI: artificial intelligence that doesn’t just react but acts, learns, and makes decisions with a degree of autonomy. It’s a thrilling frontier, one that promises to reshape industries, societies, and even our daily lives. But with great power comes great responsibility, and nowhere is this more evident than in the ethical questions surrounding agentic AI. At the heart of these questions lies data—the lifeblood of AI systems. How we collect it, use it, and protect it will determine whether agentic AI becomes a force for good or a source of harm. Let’s explore this together.

First, what do we mean by “agentic AI”? Unlike traditional AI, which follows pre-programmed rules or responds to specific inputs, agentic AI has agency. It can set goals, adapt to new situations, and act independently to achieve outcomes. Think of a virtual assistant that doesn’t just schedule your meetings but anticipates your needs, negotiates on your behalf, or even manages your finances. Or imagine AI systems in healthcare deciding treatment plans based on real-time patient data. This shift from passive tools to active agents is revolutionary—but it’s also where ethics comes into play.

Data is the foundation of agentic AI. These systems don’t magically “think” on their own; they rely on vast amounts of information—your search history, medical records, social media activity, even the way you move through a city tracked by sensors. In 2023 alone, it’s estimated that 328 million terabytes of data were generated daily worldwide, according to industry reports. Agentic AI thrives on this deluge, using it to learn, predict, and act. But here’s the catch: the quality, source, and handling of that data raise profound ethical challenges. 

Let’s start with consent. When you sign up for an app or service, you often click “agree” to terms that allow your data to be collected. But do you really know what you’re agreeing to? Studies show that only 1% of users read these agreements in full—and why would they, when they’re deliberately long and complex? Agentic AI, trained on this data, might make decisions about you—like whether you’re a good candidate for a loan or a job—without you ever knowing how or why. This lack of transparency erodes trust. If an AI denies you a mortgage, shouldn’t you know what data it used and how it weighed it? 

Bias is another ethical minefield, deeply tied to data. Agentic AI learns from the world as it is, not as we wish it to be. If historical data reflects societal inequalities—say, fewer women in STEM fields or racial disparities in hiring—the AI might perpetuate those biases. A 2021 study by the AI Now Institute found that facial recognition systems, often powered by agentic algorithms, misidentified people of color at rates up to 34% higher than white individuals. Why? Because the training data was skewed toward lighter skin tones. When AI acts autonomously on such flawed foundations, it doesn’t just mirror injustice—it amplifies it. 
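One concrete way to surface this kind of disparity is to measure error rates separately for each demographic group rather than reporting a single aggregate accuracy. The sketch below is a minimal, hypothetical audit helper (the group names and predictions are invented for illustration), showing how a model that looks fine on average can fail one group far more often:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the misclassification rate for each demographic group.

    records: iterable of (group, true_label, predicted_label) tuples.
    Returns a dict mapping each group to its error rate.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy predictions from a hypothetical recognition model:
# group "A" is classified perfectly, group "B" is wrong half the time.
preds = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 1), ("B", 0, 1),
]
rates = error_rates_by_group(preds)
print(rates)  # {'A': 0.0, 'B': 0.5}
```

Aggregate accuracy here is 75%, which hides the fact that one group bears all of the errors—exactly the pattern the facial recognition studies describe.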

Privacy, too, hangs in the balance. Agentic AI’s ability to act independently means it can sift through your data in ways you never anticipated. Consider health data: a 2024 report from the World Economic Forum projected that by 2030, 80% of healthcare decisions could involve AI. An agentic system might analyze your wearable device data, spot a pattern, and recommend a drug—all without human oversight. But what if that data is sold to insurers who raise your premiums? Or hacked by bad actors? The more autonomous AI becomes, the harder it is to control where data flows. 

So, how do we address these challenges? Ethics demands we build guardrails—principles to ensure agentic AI serves humanity rather than subverts it. As explored in our article on ethical alignment in AI, achieving this balance begins with addressing the alignment problem at its core. Let’s look at three key areas: accountability, fairness, and transparency, all rooted in how we handle data.

First, accountability. If an agentic AI makes a harmful decision—like a self-driving car causing an accident—who’s responsible? The developer? The company? The AI itself? Current laws lag behind technology, leaving gaps. We need frameworks that trace decisions back to human oversight, ensuring data inputs and AI outputs can be audited. The European Union’s AI Act, proposed in 2021 and evolving since, is a step in this direction, mandating risk assessments for high-stakes AI systems. Data provenance—knowing where it came from and how it was processed—becomes critical here. 
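In practice, tracing a decision back to its inputs means recording, at decision time, what the system decided, which model version made the call, and where the data came from. The sketch below is one possible shape for such an audit record (the field names and the loan example are assumptions, not a standard): hashing the inputs gives auditors a tamper-evident fingerprint they can later check against the raw data.

```python
import hashlib
import json
import datetime

def audit_record(decision, inputs, model_version, data_sources):
    """Build a tamper-evident audit entry for one autonomous decision."""
    payload = json.dumps(inputs, sort_keys=True).encode()
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "decision": decision,
        "model_version": model_version,
        # Provenance: which sources the input data came from.
        "data_sources": data_sources,
        # Fingerprint of the exact inputs, verifiable during an audit.
        "input_digest": hashlib.sha256(payload).hexdigest(),
    }

entry = audit_record(
    decision="loan_denied",
    inputs={"income": 42000, "credit_score": 610},
    model_version="risk-model-1.3",
    data_sources=["credit_bureau", "application_form"],
)
print(entry["decision"], entry["input_digest"][:12])
```

A real system would also sign and append these entries to a write-once log, but even this minimal record answers the auditor’s first questions: what was decided, by which model, from which data.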

Second, fairness. To combat bias, we must diversify the data that feeds agentic AI. This means intentional efforts to include underrepresented voices and correct historical imbalances. It’s not enough to collect more data; we need better data. Researchers at MIT found in 2022 that retraining AI models with balanced datasets reduced error rates in medical diagnostics by 15%. Fairness also requires diverse teams building these systems—because the people who design AI shape how it interprets the world. 
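One of the simplest correction techniques the fairness literature describes is rebalancing the training set so each group is equally represented before the model ever sees it. The sketch below shows oversampling, one such technique, on an invented toy dataset (the field names are illustrative); it is a starting point, not a substitute for collecting genuinely better data:

```python
import random
from collections import Counter

def rebalance(dataset, group_key):
    """Oversample underrepresented groups until every group
    appears as often as the largest one."""
    by_group = {}
    for row in dataset:
        by_group.setdefault(row[group_key], []).append(row)
    target = max(len(rows) for rows in by_group.values())
    balanced = []
    for rows in by_group.values():
        balanced.extend(rows)
        # Randomly duplicate rows from smaller groups to reach the target.
        balanced.extend(random.choices(rows, k=target - len(rows)))
    return balanced

data = [{"group": "A", "x": 1}, {"group": "A", "x": 2},
        {"group": "A", "x": 3}, {"group": "B", "x": 4}]
balanced = rebalance(data, "group")
counts = Counter(row["group"] for row in balanced)
print(dict(counts))  # {'A': 3, 'B': 3}
```

Oversampling only duplicates what is already there, which is why the passage above stresses that more data is not the same as better data.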

Third, transparency. Users deserve to know when agentic AI is at work and how it uses their data. This could mean simple disclosures—like a label saying, “This decision was made by an AI based on X data”—or deeper access to the logic behind outcomes. Transparency builds trust, but it’s tricky. Companies argue that revealing algorithms harms competitiveness, and overly complex explanations might confuse users. Still, a middle ground exists: give people enough to understand without drowning them in technicalities. 
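The “middle ground” disclosure described above can be as simple as a templated, plain-language notice generated alongside each decision. This is a hypothetical sketch (the wording and parameters are my own, not a regulatory format) of what such a label generator might look like:

```python
def disclosure_label(decision, data_fields, system_name):
    """Render a user-facing transparency notice for an AI-made decision."""
    fields = ", ".join(data_fields)
    return (f"This decision ({decision}) was made by an automated system "
            f"({system_name}) using the following data about you: {fields}. "
            f"You may request a review by a human.")

label = disclosure_label(
    "mortgage application declined",
    ["credit history", "income", "existing debt"],
    "underwriting-model-2",
)
print(label)
```

The point is not the exact wording but the contract: name the decision, name the system, name the data, and offer a path to human review, without drowning the user in model internals.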

Let me paint a hopeful picture. Imagine an agentic AI in education, tailoring lessons to each student’s learning style, using data from their progress to unlock potential. Or in climate science, autonomously modeling solutions to cut emissions, drawing on global sensor networks. These futures are possible—but only if we get the ethics right. Data isn’t just fuel; it’s a moral compass. How we steward it determines the path AI takes. 

In conclusion, agentic AI is not a distant sci-fi dream—it’s here, growing more capable every day. Its presence in our lives will only deepen. But with autonomy comes accountability, and with data comes duty. We must ask: Who controls the data that trains AI? Who benefits from the decisions? And how do we ensure no one is left behind? The answers lie not just in code but in our values. Let’s build an ethical Agentic AI that harnesses technology’s power while honoring the humanity it serves. 

2 Replies to “Agentic AI and Ethics – Navigating the Agentic AI Frontier”

  1. Thank you for this timely and thought-provoking piece, Dr. Ravindran. The shift toward agentic AI brings both immense opportunity and urgent ethical responsibility. Your emphasis on data as the moral compass of these systems is especially powerful.

    To build on your insights, I wonder:
    • How might we design agentic AI systems that still allow for meaningful human-in-the-loop oversight, especially in high-stakes domains?
    • What frameworks can help us evaluate not just immediate risks, but the long-term societal impacts of increasingly autonomous systems?
    • And as these systems become more embedded in daily life, how can we equip the public with the digital literacy needed to engage with agentic AI in an informed and empowered way?

    Your call to steward data with care is exactly the kind of thinking we need as we shape the future of AI. Looking forward to continuing this important conversation.

    1. Hello Azita, Thank you for your thoughtful and insightful response! I’m happy to continue this conversation and explore the crucial questions you’ve raised.

      Regarding human-in-the-loop oversight, I believe we need to design agentic AI systems with transparency, explainability, and accountability in mind. This could involve implementing techniques like model interpretability, value alignment, and robust testing protocols to ensure that AI decision-making processes are understandable and auditable. In high-stakes domains, human oversight could be facilitated through mechanisms like review boards, audit trails, or even AI-assisted decision-support systems that provide nuanced recommendations rather than definitive answers.

      To evaluate long-term societal impacts, we might draw upon frameworks like the Ethics of Artificial Intelligence framework proposed by UNESCO, or the AI Now Institute’s Socio-Technical Auditing approach. These frameworks encourage us to consider the broader social, economic, and environmental implications of AI systems and to prioritize human-centered design principles.

      “AI literacy & AI fluency” is indeed essential for empowering the public to engage with agentic AI in an informed way. This could involve initiatives like AI education programs, public outreach and engagement, and stakeholder-driven design processes that prioritize inclusivity and accessibility. By fostering a deeper understanding of AI’s potential benefits and risks, we can build a more informed and critically thinking public that is better equipped to navigate the complexities of an AI-driven world.

      Once again, thank you for your thought-provoking questions and for contributing to this critical conversation! I look forward to seeing you in Singapore during the AI & Humanity Summit in Nov.
