AI and Crisis Management
How can we use AI to enhance the work and impact of youth-led organizations across Germany, the Baltic states, Europe, and the World?
by Tanishka Murthy
In the beautiful city of Vilnius, Lithuania, selected individuals representing different organizations across Europe gathered to discuss the future of AI in crisis management, and how technology can be used to improve the efficiency and impact of youth-led organizations. Participants attended several workshops, seminars, and keynote speeches by, among others, the Ambassador of Germany to the Republic of Lithuania, the Lieutenant General of Multinational Corps Northeast, experts in information warfare, and Vilnius City Council members.
While the conference showcased a highly engaged and responsible youth population across the wider European area, it also revealed the need for further critical thinking about technologies such as AI and the work of youth organizations worldwide. Creating guidelines for the use of AI is no easy task. In a span of four days, participants were asked to learn about AI in theory, the mechanisms and calculations behind the technology itself, the dangers that come with it, and the vast possibilities it holds. Doing all of this while simultaneously remaining critical and hypothesizing about the technology's future requires an immense amount of dedication.
As the Lieutenant General said in his keynote address, “Don’t accept that things happen without wanting to influence them.” These young leaders of Europe offered a glimpse into the possibilities of the future, precisely because we do in fact have people who refuse to accept that things simply happen beyond our ability to influence them. Whether through learning about AI or any other framework, the idea that we are the generation that gets to decide the future makes us collectively strong enough to decide how exactly we want to shape it. We have the freedom to make decisions, and if these few days spent learning about the potential dangers of technology have taught us anything, it is that we also have the obligation to make the right ones: decisions about technology that benefit not only the organizations, institutions, and individuals of the present day, but also those of the future.
AI and Misconceptions
Through discussions with workshop coordinators as well as participants, it became evident that AI has become something of a buzzword, one carrying more negative connotations than positive ones. There was a clear need to first understand the misconceptions, why they exist, and what could be done to correct them. The first misconception is that AI is a finished, fully established technology. In reality, artificial intelligence is still being developed; what we see today is, in most cases, a stepping stone toward a form of artificial intelligence that could exist in the near future. By understanding that this is a technology still in development, particularly through Gretel Juhansoo’s workshops, participants seemed to shift their mindsets and approach the technology as something that will still be heavily shaped by our opinions and actions. The second misconception is that AI can stand on its own and produce its own work. While the debate on this aspect of AI is still ongoing, it was emphasized time and time again during these few days that AI is simply a tool. It is not meant to replace everything we already do, and it is certainly not meant to be human. While the irony remains that we are constantly trying to make AI more human-like, it was not intended to replace our cognitive abilities.
AI and the Debates Surrounding it
From this topic of whether AI can be a producer of work, the discussions shifted. The questions, very eloquently posed during the conference, became: “Does AI replace you asking questions?” and “How can we prevent losing our own competency?” These questions drew the loudest voices, and were soon followed by the discussion on “authorship.” Can AI ever be the true author or creator of a work? What determines the authenticity of what it produces? While conclusions are never as straightforward as the questions themselves, the GBYEN members gradually became more intentional in their discussions, recognizing that thinking about AI and its implementation in their respective organizations means seeing it as a tool to enhance our abilities rather than replace them. Members became aware of the misconceptions and able to examine pre-existing notions before jumping to conclusions. The question of authorship proved the most difficult debate of all, yielding no clear answer from any group and raising still more questions to be debated. Perhaps this was a point of realization: while guidelines for a specific aspect of AI may be useful, in this case whether or not to grant authorship, there is value in first asking whether we have enough information about the technical workings of AI to create such guidelines at all. That members recognized the need for further discussion seemed a positive signal that these workshops enhanced not only participants' knowledge but also their critical awareness of the limits of their own understanding.
AI and Disinformation
A key step for the GBYEN group in identifying its needs with regard to AI was understanding how the technology can be weaponized. As emphasized in the panel with Ms. Viola von Cramon and Mr. Viktor Denisenko, “disinformation is used as a tool to legitimize aggression.” AI can be, and sometimes is, used to create and distribute disinformation. GBYEN members were reminded repeatedly over these few days how easy it is to assume that AI-generated information is authentic. Information that merely seems accurate is simply not enough anymore. While participants demonstrated a commendable prior understanding of how disinformation is created and spread, the greatest value lay in considering it from the perspective of youth-led organizations. As these organizations grow, training new members will be essential to that growth. Part of training new members who can have a larger impact in Europe and globally is sharing the knowledge of how to distinguish accurate information from inaccurate or potentially harmful information.
The Future of AI in our Context
One of the most impactful portions of this conference, in terms of learning about the mechanics of AI, was the workshop conducted by Jost Wiethölter, in which participants simulated AI and its calculations using paper cups and small slips of paper with numbers written on them (the Nim game). Not only was this simulation a natural icebreaker for participants, but it also brought about a significant realization:
1. AI is created through the concept of eliminating all possible mistakes. Simulating AI means putting your human side behind.
The mechanism behind artificial intelligence has a great deal to do with repeating the same action until a system emerges that eliminates the possibility of mistakes. This is not an inherently human characteristic. It means that when we try to understand how AI can help us, whether in our daily lives or in our respective organizations, we have to lower our expectations of it resembling a human being and treat it as a tool that enhances our pre-existing abilities. As one participant put it very clearly, we have to “put our human side behind” to understand AI.
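The cup-and-slip exercise can be approximated in a few lines of code. The sketch below is an assumption about the setup (a single pile of 10 pieces, taking 1 to 3 per turn, last piece wins, and the slip for a losing move discarded after each defeat); the workshop's exact rules may have differed, but the learning principle, repetition that gradually removes mistaken moves, is the same.

```python
import random

PILE = 10          # assumed starting pile size
MOVES = [1, 2, 3]  # assumed legal moves per turn

def new_cups():
    # One "cup" per pile size, holding a numbered slip for each legal move.
    return {n: [m for m in MOVES if m <= n] for n in range(1, PILE + 1)}

def play(cups):
    """Play one game: the cup 'AI' draws slips, the opponent moves randomly.
    Returns the AI's (pile, move) history and whether the AI won."""
    pile, history, ai_turn = PILE, [], True
    while True:
        if ai_turn:
            slips = cups[pile]
            if not slips:              # every slip here was removed: resign
                return history, False
            move = random.choice(slips)
            history.append((pile, move))
        else:
            move = random.choice([m for m in MOVES if m <= pile])
        pile -= move
        if pile == 0:
            return history, ai_turn    # taking the last piece wins
        ai_turn = not ai_turn

def train(cups, games=5000):
    # "Repeat until mistakes are eliminated": after each loss, throw away
    # the slip for the AI's final (losing) move.
    for _ in range(games):
        history, won = play(cups)
        if not won and history:
            state, move = history[-1]
            if move in cups[state]:
                cups[state].remove(move)

cups = new_cups()
train(cups)
```

After a few thousand games the cups for losing positions empty out, and the system "knows" Nim without anything resembling human reasoning, which is exactly the point the exercise makes.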
So, where does that leave us, as young leaders of Europe and the world, when it comes to using AI to manage crises and have a bigger impact through our organizations and initiatives? After four days, several workshops, inspiring keynotes, and countless conversations, the answer seemed to lie in education. Not just education as a whole, but specifically educating more teachers and industry professionals across sectors on the technology behind AI and its safe and ethical use. With award-winning programs such as the Teachers Lead Teach Initiative expanding, we are seeing more than ever the demand and need for a better understanding of this valuable addition to the future of technology. While implementation within organizations is equally important, being students gives us the opportunity to encourage institutions to accelerate investment in “educating the educators” about AI and emerging technology.
Keep Questioning, Keep Trying
The final goal, specifically for the GBYEN group, was to create guidelines for the use of AI in organizations and beyond. The following are the titles of the guidelines created:
1. Assess Needs and Objectives
2. Ensure Secure Access for Team Members
3. Involve Youth in the Process
4. Choose User-friendly Tools
5. Provide Training and Resources
6. AI-Driven Data Management
7. Supporting the Creation of AI Which Puts People’s Needs and Well-Being First
8. Automation in Administration
9. Monitoring and Evaluating
10. Ethical AI Use (transparency, inclusivity and data privacy)
Participants were remarkably critical throughout the discussion. As a journalist covering this group, I found this empowering to see. The evident range of expertise within the group, the skepticism exhibited by many, and the confident voices asking whether the current way we see AI is the best, if it is at all, are all reflected in the guidelines they created. Why? Because this is not easy. There is no simple answer in which one action automatically resolves every safety or ethical issue surrounding AI and its use. Members of the group were quick to recognize that, despite attending several workshops and hearing from many experienced industry professionals, coming up with solutions is not half as easy as recognizing the problems.
Many suggested that the questions meant to support them in forming the guidelines were too general, making it difficult to find common ground for discussion. Moreover, since each member spoke from personal experience within their own organization, their needs differed, and so did the guidelines relevant to them. This prompted the idea that critical thinking about AI must be continuous, taking individual contexts and timing into account. As our understanding of the technology improves over time, so will our ability to be critical, making it meaningful to reevaluate the guidelines and potentially restructure them when that moment comes.
It is not enough to leave this discussion here after four days. The research must continue; the debate must continue. We cannot accept that things happen without wanting to influence them. We cannot accept that new technology can do harm without using our knowledge to find ways to minimize it. We must be critical, we must keep asking questions, and we must always find ways to influence the world around us.
If you are interested in the AI guidelines of GBYEN, take a look here!
Graphic Recording by: Agne Rapalaite-Rasiule, visualmind.lt
Photos by: Simonas Lukoševičius (Instagram: @simonas_luko)