How does active use of artificial intelligence affect our brain function? Its role and use in a medical context

Artificial intelligence technologies, especially ChatGPT, have become an integral part of many people's daily lives. Whether it is summarizing texts, analyzing data, or simply suggesting recipes, the technology provides fast and convenient responses.

However, a question arises: what happens to our brains when we increasingly rely on artificial intelligence for functions such as thinking, writing, and problem-solving?

A recent study from the Massachusetts Institute of Technology in Cambridge, USA, suggests that people who rely on ChatGPT regularly may be doing their brains more harm than good.


What was the purpose of the study?

The question that motivated the researchers was whether using artificial intelligence while writing affects cognitive processes such as attention and memory.

To investigate this, three research groups were created:

  • Participants using an artificial intelligence tool such as ChatGPT;
  • Participants gathering information through traditional internet search methods;
  • Participants relying solely on their own knowledge without any external assistance.

The main aim was to assess the impact on cognitive functions such as attention, memory, language, and writing skills.


Study design and methods

This four-month study involved 54 participants, mainly university-educated young adults familiar with digital technologies. Over the first three sessions, each participant wrote essays under the conditions of their assigned group. In the fourth session, the conditions were swapped:

  • Participants who initially used ChatGPT were required to write without it.
  • Participants who initially did not use any tools were given access to ChatGPT.

During the writing process, brain activity was measured using electroencephalography (EEG), with a particular focus on alpha and beta brain waves (which are associated with attention, memory, and cognitive activity).

In addition, the texts were analyzed linguistically and evaluated both by teachers and an artificial intelligence system. Interviews were also conducted with the participants.


Results

Brain activity:

  • Participants who wrote without any tools showed the highest level of cognitive activity.
  • Those who used search engines demonstrated a moderate level of brain activity.
  • ChatGPT users showed the lowest level of brain activity, particularly in areas associated with attention and memory.

Text structure and grammatical accuracy:

  • Texts generated with ChatGPT were grammatically correct but showed weaker levels of personal expression and creativity.
  • Participants who wrote based on their own knowledge produced more individual and creative texts, although they sometimes contained linguistic inaccuracies.
  • Search engine users demonstrated an intermediate performance between the two groups.

Memory retention:

  • Participants using ChatGPT had difficulty recalling and quoting the sentences they had written only a few minutes earlier.
  • Those who relied on their own knowledge demonstrated better memory retention and recall ability.
  • These findings suggest that when using artificial intelligence for writing, information is processed less deeply in the brain.

Participants who first worked with artificial intelligence tools and were then required to write without them showed signs of reduced cognitive engagement and slower mental processing, possibly due to adaptation to or overreliance on the tool. In contrast, participants who first wrote using only their own knowledge and were then given access to artificial intelligence showed an increase in brain activity: gaining the tool after working independently appeared to stimulate cognitive curiosity and engagement.


Why are these findings important?

This study shows that tools like ChatGPT, while providing convenience, may lead to a decline in cognitive abilities with long-term use. In particular:

  • Attention may decrease
  • Memory may weaken
  • Language and expression skills may become more passive
  • Creative and critical thinking abilities may be reduced


Study limitations:
The number of participants was relatively small (54 individuals), and the four-month duration is too short to demonstrate long-term outcomes. In addition, the way each participant interacted with ChatGPT was not standardized.


Conclusion and ways to maintain a balance in cognitive activity:
Artificial intelligence can be helpful, but improper use may also lead to passivity. With continuous use, memory may weaken, a sense of originality may decrease, and creative thinking may decline. Therefore, it is important to maintain a healthy balance when using artificial intelligence tools.

  • To achieve this, questions should be designed not only to obtain answers, but also to support learning.
  • On the other hand, forming one’s own thoughts and ideas on a topic first, and then comparing them with artificial intelligence outputs, encourages active thinking while also helping to explore new alternatives.

Use of artificial intelligence tools in medical contexts

Caution is essential when using ChatGPT in a medical context.

The most obvious limitation is that a chatbot cannot perform a physical examination, take blood samples, or carry out imaging tests such as X-rays. It also cannot conduct a detailed patient interview (anamnesis) based on real clinical interaction. As a result, the underlying medical issue may not be fully identified, and responses are based only on the symptoms and information provided by the user.

Another significant issue is the use of outdated information. The current ChatGPT-4 model is trained on data up to 2021, which means it may not reflect the latest medical developments and updates. In addition, diagnostic suggestions provided without clear references to medical sources raise concerns regarding reliability, as patients cannot verify the accuracy of the information. It is often unclear where the information comes from or what evidence it is based on.

In conclusion, while ChatGPT can provide fast, simple, and accessible answers to health-related questions, it also carries important risks, including the possibility of incorrect or inaccurate diagnoses, insufficient guidance, and outdated medical information.
