
AI: between revolution and threat - a view from the Rector of the University of Luxembourg

Last updated
30.01.25

Artificial intelligence is the major technological trend of the moment. However, the term "AI" is used so broadly that it sometimes loses its meaning. Jens Kreisel, Rector of the University of Luxembourg, cites an example: a toothbrush advertised as "AI-driven". For him, this is a typical case of marketing hype.

Nevertheless, the impact of AI on science and society is clear. In 2024, the Nobel Prizes in chemistry and physics were awarded for AI-related research, underlining its importance in modern discoveries. AI is no longer the privilege of a narrow circle of specialists - it is available to everyone. When we use ChatGPT or similar tools, we are in effect interacting with powerful supercomputers and analysing large amounts of data.

The idea of AI has been around since the 1950s, but for a long time it remained largely an academic discipline. In the 1980s, development stalled - this period is even referred to as the "AI winter" due to funding cuts. The breakthrough came when AI merged with high-performance computing and big data analysis. Modern machine learning algorithms, such as deep learning, have learnt to process huge amounts of information and make accurate predictions.

Twenty years ago, AI, supercomputing and data analysis specialists worked in silos. Today, these three fields have merged into a single digital continuum that has become an indispensable tool for science and business.

Previously, research in this area was publicly funded, but this has changed. Large language models such as ChatGPT are now being developed in the private sector. In Time's list of the 100 most influential people in AI, almost all are from business. Twenty years ago, most of them would have been university academics.

Despite the dominance of the private sector, the line between science and business is blurring. For example, physics Nobel laureate Geoffrey Hinton has worked at both the University of Toronto and Google Brain, while Demis Hassabis, a laureate in chemistry, is CEO of Google DeepMind. This demonstrates that AI is the product of close co-operation between public and private entities.

Jens Kreisel, in an interview with Luxinnovation, calls AI a "key technology" that accelerates interdisciplinary research. Today's scientific problems - for example, the impact of the environment on health - cannot be solved within a single discipline. 80 per cent of interdisciplinary research at the University of Luxembourg is based on data, AI or supercomputers.

AI has already changed finance, law, manufacturing and cybersecurity, but the most sweeping changes are expected in biotechnology and medicine. The integration of AI with life sciences will open up new horizons - from medical discoveries to innovations in healthcare.

Despite the tremendous opportunities, AI also carries serious risks. One of them is its high energy consumption. The IT industry is currently responsible for 2.5% of global CO₂ emissions, and this figure will only grow. Innovations are needed to improve energy efficiency: "green" data centres and better-optimised computer chips and algorithms.

But the biggest danger of AI is related to democracy. Fakes and misinformation, including deepfake videos, are becoming increasingly difficult to detect. Previously, false information spread slowly, but today social media and commercial interests are amplifying this effect. Rapid and massive disinformation threatens the sustainability of democratic processes.

Some fear that AI could gain consciousness and get out of control, but Jens Kreisel believes these fears are premature. Human consciousness is not only related to data processing, but also to physiological sensations, emotions and the ability to think about long-term consequences. However, the development of AI requires careful attention and thoughtful regulation.

AI can benefit any industry, but its implementation requires a strategic approach. The University of Luxembourg emphasises several key principles.

Firstly, the best ideas come from employees working with real problems, be it finance or HR. Management shouldn't implement AI from the top down - it's more important to consider the needs of those who use data on a daily basis.

Secondly, the experimentation stage is important. People should be given the opportunity to learn AI tools on their own, test them in their field, and only then evaluate the impact on performance.

Thirdly, it is critical to train employees. Experience shows that after the first tests, people feel more confident working with AI.

Finally, it is worth respecting those who are not yet ready to use AI. This is a fundamental organisational transformation, and not everyone is able or willing to adapt to it immediately.


Photo sources: Steve Johnson, Unsplash

Author: Aleksandr