
Welcome to the 10 new deep divers who joined us since last Wednesday.
If you haven’t already, subscribe and join our community to receive weekly AI insights, updates, and interviews with industry experts straight to your feed.
DeepFest community, we have a question for you: What are the ethical concerns that are keeping you up at night?
As the potential for AI to have a positive impact on our lives continues to grow, so too does the potential for it to increase inequalities and put individuals and organisations at risk.
We’ve been asking AI leaders about the issues in AI ethics that are causing them the most concern right now. And we want your perspective too.
We all know it: AI systems learn from data, and flawed or incomplete data creates flawed or incomplete outputs from AI. This isn’t just a technical annoyance, but an issue with serious implications in the real world – with the potential for discrimination and poor decision-making around every corner.
But there’s hope. Marcello Mari (CEO of SingularityDAO) believes blockchain could play a critical role in mitigating AI bias:
“...the transparency of blockchain data can play a significant role in combating bias within AI systems. Firstly, because blockchain’s immutable ledger ensures that all data inputs and AI model changes are recorded transparently. This allows stakeholders to trace the origins of training data, monitor the evolution of the model, and audit decision-making processes.”
Decentralised by design, blockchain could help democratise access to diverse datasets and reduce the influence of just a few dominant players who might control biased data sources. And Mari sees it as a step towards fairer AI.
“Blockchain can also facilitate decentralised data marketplaces where diverse datasets are securely shared and accessed,” he said. “This has the potential to democratise access to data, ensuring that smaller organisations and underrepresented groups contribute to, and benefit from, the AI ecosystem.”
Concerns about the safety of AI for humans – and especially the risks of AI superintelligence, when it becomes a reality – are becoming increasingly urgent. Roman Yampolskiy (AI Author and Director at Cybersecurity Lab, University of Louisville) said:
“One of the pivotal moments in my career was when I fully grasped the potential and the risks associated with superintelligent AI systems. This realisation propelled me to focus on AI Safety and Security – fields I consider crucial for the responsible development of AI technologies.”
And although awareness of AI safety has improved, Yampolskiy thinks we still have a long way to go before AI development can really happen with human safety at its heart.
“If I could change one aspect of current AI development, it would be to instill a stronger culture of 'safety-first' in the AI community,” he said. “This involves prioritising long-term implications over short-term gains and integrating ethical considerations right from the early stages of AI design and development.”
For many of the AI leaders in our community, the speed of AI development itself is a primary concern. It’s exciting, obviously – but the fast pace of AI evolution means it’s hard for those concerned with safety to keep up.
Sol Rashidi (Data & AI Advisor & Former SVP & CAO, The Estée Lauder Companies) put it like this:
“The pace of change is both exciting and frightening. For those of us who have been in the space, the pace of change is the fastest we've seen it, and without proper guidelines, principles, protocols, regulation, it can rocket launch into unbelievable innovation while also creating havoc on humanity.”
One major concern, in light of this incredible speed of development, is the quality of data being used to train AI models. We’ve mentioned this already, but it’s worth bringing the point home; Rashidi warned that feeding AI misinformation could have dangerous consequences.
“After all, everything is being trained on data,” she pointed out – “data we create, generate, and train on, so if we're feeding it fiction vs. fact, we have the potential of creating a very precarious situation for us.”
Elizabeth Adams (Affiliate Fellow at the Stanford Institute for Human-Centered AI, and Former Chief AI Ethics Advisor) highlighted the distinction between AI bias and AI harm.
“AI bias involves the presence of systematic and unfair favoritism within an AI system, often stemming from biased training data,” she said. “On the other hand, AI harm encompasses the tangible and intangible consequences that result from biased AI outcomes, extending to real-world impacts on individuals and communities.”
Her work is focused on developing responsible AI frameworks that integrate ethical considerations at every stage of AI development. She believes that AI must be built on key principles such as fairness, transparency, and inclusivity:
“Responsible AI, for me, involves the ethical and accountable development, deployment, and use of AI systems. My exploration, framed through my Leadership of Responsible AI conceptual model, emphasises broad employee stakeholder engagement.”
Adams also noted that beyond general ethical principles, several key areas need urgent attention.
Every AI expert we’ve spoken to agrees that AI must be developed responsibly, with safety, fairness, and accountability at its core.
We want to know about the ethical concerns that are keeping you up at night. What’s your biggest worry – and what do you wish the global AI community would do better?
Join us at DeepFest 2026 to immerse yourself in the conversation. Ethical debates in AI affect all of us – so every single one of us should have a voice.