“Ethical AI must guide technological progress, ensuring people’s well-being without hindering innovation.”

22.10.2025

While AI represents an undeniable global opportunity, it also raises important questions about potential risks—not only from a technological or labour standpoint, but also from a social one. Should AI be ethical, and is it being developed that way? Dr. Edurne Loyarte, Director of Organisation and Governance at Vicomtech, shares key insights into this unique challenge.

Is AI being developed and deployed ethically?

Emerging digital technologies such as AI introduce complex ethical risks related to surveillance, manipulation, autonomy, and disinformation. This creates the need for legislation that matches the scale of these risks, as well as robust organisational ethical frameworks that reflect internal policies and values.

Within organisations, there is often a significant gap between ethical aspirations and the practical implementation of ethical models in research projects. The ethical use of digital technologies is a cornerstone of responsible research. In the Basque Country, following European guidelines and recognising the importance of ethics in these technologies, mechanisms are being developed (among them BAIC, the Basque Artificial Intelligence Center) to help organisations implement ethical models. The key lies in ensuring that people within each organisation genuinely embrace shared values and a common ethical framework.

Does the growing accessibility of AI make it more important to prioritise ethics?

AI development falls within the concept of digital humanism, which arises in response to the rapid evolution of digital technologies and their impact on human life, rights, social responsibility, and sustainability. An ethical model must guide technological progress so that it aligns with universal human and ethical values, ensuring people’s well-being without stifling innovation.

In what ways could AI fail to be socially responsible?

AI could be considered socially irresponsible if, for instance, it is driven solely by economic goals or technological capabilities rather than serving human development and social well-being. It could also fall short if it introduces unacceptable or complex ethical risks related to manipulation, autonomy, disinformation, or data protection. Finally, a lack of adequate evaluation and risk mitigation frameworks—or ethical mechanisms to manage the dilemmas emerging from these technologies—would also indicate a failure in social responsibility.

What are the key principles for ethical AI?

The main principles for ethical AI include:

  • Alignment with human and social values: The model must guide technological progress so that it aligns with universal human and ethical values, serving human development and social well-being.

  • Ethical principles of the European Commission: Including technical robustness and safety, transparency, privacy and data governance, and societal well-being.

  • Ethics-by-Design: Integrating ethical values directly into technical, management, and design decisions from the earliest stages of technological innovation.

  • Risk assessment and mitigation: Establishing processes that address dual-use (civil and military) technologies and enable risk assessment and mitigation.

  • Awareness and practical implementation: Ensuring that responsible research is “lived and breathed” by every individual, aligning organisational values with personal ones. The focus must be on ethics in practice, grounded in real-world dilemmas.

  • Governance structures: Establishing an external ethics committee to ensure impartial ethical review, with multidisciplinary members who understand the business context. Additionally, maintaining an independent ethics channel—separate from the conduct channel—dedicated exclusively to ethical incidents and requests.

Are there similarities between AI and previous technologies in terms of ethical development, or is this an unprecedented scenario?

Some sectors and technologies (such as healthcare, security and cybersecurity, language technologies, biometrics, or facial recognition) already carried ethical implications before AI. However, AI poses a "unique challenge" with key differences from those earlier cases. Current research points to the need for specific regulations and value structures that combine organisational guidelines with people's practical experience, rather than simply adapting traditional ethical models developed mainly for medical or social contexts. In short, previous ethical models offer useful references, but they are not fully adequate for AI.

Can you share examples of how Vicomtech integrates ethics into real AI projects?

Vicomtech has implemented a validated ethical model, tested through a pilot project (Starlight – H2020) and across more than one hundred active research projects covering different technologies, departments, and TRLs (Technology Readiness Levels). All of them undergo ethical evaluation to assess risks and define appropriate mitigation measures during development.

In this respect, a key achievement has been bridging the world of digital technologies—with its legal and management requirements—and the fields of ethics and philosophy. To this end, Vicomtech has worked with Globernance, the Democratic Governance Institute based in Donostia–San Sebastián.


Thanks a lot to Grupo SPRI!
