Facing the opportunities and risks of Artificial Intelligence
Jorge Posada. Associate Director, Vicomtech
There is an open debate across different areas of society and business about Artificial Intelligence (AI). We constantly receive news of technological advances related to AI that were unthinkable until very recently and that open very promising horizons: smart factories and products; connected, cooperative and autonomous mobility; personalized medicine; smart energy grids; digital security; virtual assistants; machine translation... In all these areas, and in many more, innovations coming from AI show us an undeniable direction of progress.
However, we also receive information about some of the risks that AI could pose if it is not properly controlled, and this often has a significant media impact. Some of it is sensationalist and lacks rigour (conspiracy-style claims such as "AI is already dominating us" or "we will lose all our jobs"), while some is more serious, such as the recent statements by leading AI researchers on the need to mitigate the possible risks of its misuse (e.g., the "Statement on AI Risk" or the "Pause Giant AI Experiments" open letter). An approach I consider much more successful is that of the European Commission and its AI Act, which aims to regulate AI from a risk-based perspective (unacceptable, high, or limited risk) and through transparency requirements for Generative AI systems such as ChatGPT. This will most likely be the first detailed international law regulating AI, and its preliminary approval by the European Parliament will be an international milestone.
In this context, I consider it important to make some recommendations from the perspective of a Technology Centre such as Vicomtech, which has specialised for decades in Research, Development and Innovation in Artificial Intelligence and Visual Computing technologies and systems, with the purpose of improving the competitiveness of our companies and our society.
First: It is important to contextualise and demystify Artificial Intelligence systems. We are talking neither about Aladdin's Lamp nor about Pandora's Box. AI is a scientific-technological discipline that has existed for some 70 years and has proven to be a clear driver of innovation, made by people and for people. And in the Basque Country we have an excellent scientific-technological and business fabric (such as BAIC and BRTA) that can help us understand it, develop it and use it for the benefit of our society.
Second: The big debates about the future of AI should not hinder a practical, actionable approach in our businesses. Many of the real problems we face, whether immediate or long-term, can be tackled today with targeted AI systems (so-called "narrow AI") that address specific problems, with immense potential for innovation and full compatibility with existing legal frameworks and data protection requirements.
Third: We must actively look for opportunity scenarios in which human intelligence is enhanced and extended by artificial intelligence. In our experience, those are the scenarios that have had the greatest impact and added value for the companies and institutions we work with.
Fourth: Ethics, standards, regulation, and certification will be key aspects to consider from a practical perspective. As with any advanced technology, these aspects will determine the successful implementation of Artificial Intelligence systems and will help to mitigate the associated risks. We must be fully aware of their relevance.
In summary, Artificial Intelligence systems are already changing our businesses and our lives. AI is neither a utopia nor a dystopia; it is a reality. Its evolution has shown extraordinary innovation potential, and we need to prepare for the next steps proactively, helped by experts in its development, regulation, and use.