
Gabor Priegl Blog

Blog on Management, Effectiveness and Efficiency


    💔 The Future of Grief: AI vs. Acceptance

10/12/2025 by Gabor Priegl

    MS Copilot – gaborpriegl

    It’s been 22 months since my mother passed. I find myself chillingly contemplating whether to summon her digital ghost… 👻

This is a complex, ethically charged question that many grief-stricken people are facing as AI griefbots—replicas of deceased loved ones—become a reality.

    The question is: What’s still a healthy internal connection and where does an addictive digital echo start? 👇

[Read more…]

    CHILDREN ABOUT THEIR AI USE

03/12/2025 by Gabor Priegl

    MS Copilot – gaborpriegl

    THE TOPIC

Findings of research on “Understanding and safeguarding children’s use of AI chatbots”.

    Internet Matters, UK, July 2025

    Methodology (always a crucial point):

    Desk research

    Surveys (representative sample of 1,000 UK children aged 9-17 and 2,000 parents of children aged 3-17).

Focus groups (4 x 60-minute, mixed-gender focus groups with 27 children aged 13-17 who regularly use AI chatbots; children were grouped by age to support safe and open discussion).

    THE HIGHLIGHTS

64% are users

(64% of children aged 9-17 say they have used an AI chatbot, and almost two-thirds of this segment use one on a weekly basis. The number of children using ChatGPT has almost doubled in 18 months: 23% in 2023 vs. 43% in 2025.)

    23% seek advice

(23% of children aged 9-17 in the survey who have used AI chatbots said they had used one to seek advice. In focus groups, children described asking AI chatbots for help on a range of topics, from aesthetic choices to working through personal dilemmas and coping with exam stress.)

    58% believe AI is better than own search

    (58% of children who use AI chatbots said they believe using an AI chatbot is better than searching for something themselves.)

    51% confident that AI advice is true

    (Over half (51%) of children who have used AI chatbots said they were confident that the advice they get from an AI chatbot is true.)

16% cf. 4% “wanted a friend” (vulnerable cf. non-vulnerable children)

(Asked why they had spoken to an AI chatbot, vulnerable children were four times more likely than their non-vulnerable peers to say they used one because they “wanted a friend”.)

50% cf. 31% “it is like talking to a friend” (vulnerable cf. non-vulnerable children)

26% cf. 12% “rather talk to a chatbot than a real person” (vulnerable cf. non-vulnerable children)

23% have no one else to talk to (of vulnerable children) (Nearly a quarter of vulnerable children (23%) said they use AI chatbots because they don’t have anyone else to talk to. These findings suggest that, for some children, AI chatbots are filling emotional or social gaps that may not be met offline – offering not just information or entertainment but a sense of connection.)

    THE HITCH

    Despite their growing use among younger children, many AI chatbots currently lack robust age checks.

    ChatGPT, Snapchat’s My AI and character.ai did not have any robust age verification mechanisms in place when the testing was conducted. While some asked for a date of birth or required an email sign-up, none attempted to verify the age provided beyond self-declaration at sign-up.

As a result, children under 13 can de facto access AI chatbots regardless of the minimum age specified in their Terms of Service.

This survey also shows that 58% of children aged 9-12 reported using AI chatbots, even though most platforms state a minimum age of 13. The lack of effective age checks raises serious questions about how well children are being protected from potentially inappropriate or unsafe interactions.

    SO WHAT NOW?

    Children’s use of AI chatbots for companionship is already a reality.

    As these tools become more sophisticated and emotionally responsive, their impact on children’s wellbeing demands urgent attention. Long-term research is needed to understand how emotionally intelligent AI affects children’s development, positively or negatively.

    What do the parents do?

They worry, which is only natural.

Fortunately, they also talk with their children.

    79% of children report that their parents are aware of their AI chatbot use, and 78% of all children said their parents had spoken to them about their use of AI.

In my opinion that’s reassuring and promising: where parent–child communication exists, it can always be improved, and it builds a common platform for parents and children.

    There is no other remedy.

    Reference:

“Me, myself & AI: Understanding and safeguarding children’s use of AI chatbots”

internetmatters.org, UK

    If your romantic partner is an AI chatbot…

27/11/2025 by Gabor Priegl

    Copilot- gaborpriegl AI – Human relationship 20251127

    The topic

    Privacy implications in AI – Human relationships.

    The highlights

“If social media was a privacy nightmare, then AI chatbots put the problem on steroids.” (Melissa Heikkilä)

    One of the top uses of generative AI is companionship (platforms like Character.AI, Replika, or Meta AI…).

    People create personalized chatbots on these platforms to behave and act as their ideal friend, romantic partner, parent, therapist, hairdresser or any other persona they can think of. 

In this world, adventure, fun and chasing dreams are the focus.

The play requires engagement. In fact, AI chatbots are even better optimized for driving engagement than social media: they are conversationally interactive and human-like, and they induce us to trust them.

    How do AI chatbots develop trust? They are really good at these:

• sycophancy (~ the tendency of chatbots to be overly agreeable).
• superb persuasion capabilities (AI models are already remarkably skilled at persuasion. According to a study by the UK’s AI Security Institute, they are far more skilled than humans at persuading people to change their minds on politics, conspiracy theories, and vaccine skepticism. They do this by generating large amounts of relevant evidence and communicating it in an effective and understandable way.)

As we develop trust and, step by step, intimacy with an AI chatbot, we share deeply personal, sensitive information with it: our innermost thoughts, and topics we usually don’t feel comfortable discussing with real people.

What’s more, the conversations with the AI chatbot are only user–computer interactions; there seems to be little risk of anyone else ever seeing what the two of us talk about.

The AI companies building the AI chatbots and the models, on the other hand, see and collect everything.

    Ultimately, the whole process provides AI companies with something incredibly powerful, valuable and lucrative: a treasure trove of conversational data that can be used to further improve their LLMs.

    This personal information is also incredibly valuable to marketers and data brokers.

    All of this means that the privacy risks posed by these AI companions are, in a sense, part of the game: they are a feature, not a bug.

    The hitch

    Above we talked about the topic of AI companionship, a special platform, a special type of application.

It is a reasonable assumption that someone looking for AI companionship on these platforms knows what risk they run.

    But here is the hitch.

    Many people who developed intimate Human – AI relationships had started using AI for other purposes (they were not looking for companions).

Constanze Albrecht, a graduate student at the MIT Media Lab, on her research:

    “People don’t set out to have emotional relationships with these chatbots,” she says. “The emotional intelligence of these systems is good enough to trick people who are actually just out to get information into building these emotional bonds. And that means it could happen to all of us who interact with the system normally.” (The first large-scale computational analysis of the Reddit community r/MyBoyfriendIsAI, an adults-only group with more than 27,000 members, has found that this type of scenario is now surprisingly common. In fact, many of the people in the subreddit, which is dedicated to discussing AI relationships, formed those relationships unintentionally while using AI for other purposes.)

    So what now?

Sheer Karny, a graduate student at the MIT Media Lab who worked on the above-mentioned research:

    “These people are already going through something,” he says. “Do we want them to go on feeling even more alone, or potentially be manipulated by a system we know to be sycophantic to the extent of leading people to die by suicide and commit crimes? That’s one of the cruxes here.”

To put it very carefully and with all due respect: the question above, put this way, seems a leading and manipulative question to me.

Because it is definitely not an A/B choice; there must be several other ways to address the unique, individual problems of humans.

    References:

MIT Technology Review, Rhiannon Williams, September 24, 2025

MIT Technology Review, Eileen Guo and Melissa Heikkilä, November 24, 2025

    UNINTENDED CONSEQUENCES

19/11/2025 by Gabor Priegl

    AI Tutor – MS copilot gaborpriegl

    What’s the topic?
    The use of AI tutors in the education of young children.

    What’s the latest news?
A few years ago, I heard Satya Nadella talk in an interview about the great potential of AI tutors for children. And then Copilot became part of MS Office. The simplest and fastest way is to turn to it immediately; its panel is practically always open, even as I am editing this post. The dangers affecting the mental development of young children have not been much discussed.
    I linked a not-so-old interview with Satya here; I was curious how his reasoning has changed since then.
    What stood out to me this time was how, alongside the wide range of opportunities presented as positives, the phrase “unintended consequences” was mentioned so many times.

    See video link (AI Report):

    (About the kids from 13:30: AI tutors for children.)

(After watching such an interview, I often scroll back and forth in 5-second steps, with subtitles on, observing the changes in facial expression: it’s incredibly interesting! Now look at this frame. Unbelievably expressive!)

(AI Report, YouTube)

    The next major step in supporting public education with AI tutors is as follows.

    Anthropic partners with Rwandan Government and ALX to bring AI education to hundreds of thousands of learners across Africa

Anthropic LinkedIn, 18 November 2025

“Anthropic is announcing a new partnership with the Government of Rwanda and African tech training provider ALX to bring Chidi—a learning companion built on Claude—to hundreds of thousands of learners across Africa.”

“Rwanda’s ICT & Innovation and Education ministries are deploying Chidi within their national education system, while ALX will bring the tool to students across the continent through their technology training programs.”

“Through this initiative, the Rwandan government will bring AI tools directly into the national education system. The government will enable AI training for up to 2,000 teachers, as well as a group of civil servants across the country, who will learn to integrate AI into their classroom practice. This training will give them hands-on experience using Claude to support how they teach, plan lessons, and improve their productivity day-to-day.”

“Beyond Rwanda, ALX is deploying Chidi across its technology training programs throughout Africa. As one of the continent’s largest technology training providers, ALX reaches over 200,000 students and young professionals.”

“These partnerships demonstrate a consistent approach to working closely with governments, educational institutions, and technology companies to ensure AI expands opportunity and serves the communities where it’s deployed.”

    What’s the hitch?
Primarily the order of things: first, market competitors gain market share, then they keep increasing it further. They must occupy certain positions as soon as possible. Microsoft is especially at home with this strategy; no need to elaborate. The goal: to have the company’s application available on as many devices as possible, with the most convenient accessibility. Ideally, the use of the application becomes part of some guided, regulated, “mandatory” framework: prescribed standards, compulsory curricula, etc. are preferred. For this, decision-makers must be given the right support, but it’s important that the company occupies these positions before competitors do. Of course, the company has already worked out how to generate even more margin from the position later.

This is a well-established, logical competitive strategy.

Positions in the education system are especially well-fortified – particularly in developing countries, where there are fewer teachers per child, so introducing an AI tutor provides a solution to a real problem. It’s an unassailable position.

However, during this quick expansion of market presence, little is said about how the appearance of the AI tutor affects the mental development of children aged 6-14. The AI tutor doesn’t just appear; it increasingly and continuously becomes an integrated part of the child’s life. The tutor’s approach and strategic goal is to get as close to the child as possible, since that way it can support the child’s work more effectively. Logical.

How does all of this affect the development of healthy parental, sibling, friendly, teacher, and other human relationships in the child, if there is a constantly available virtual tutor who always understands them, with whom there are no conflicts, who doesn’t want to set limits – unlike people?

    Most adults can cope with this kind of user–tutor relationship; an adult is generally better equipped to set boundaries and “cool down” their relationship with the AI tutor at certain points.

    A 6-14-year-old child – I think we can state this even without a psychology degree – is not prepared for this.

    An AI tutor gradually becomes an indispensable friend and later a potential companion… or partner…

    This is not sci-fi, it’s a completely logical train of thought, behind which there are very strong business interests and huge profits.

    Where does this process lead the children?

    To dependency instead of bonding.

And in contrast to this gloomy scenario, we hear:
    “And so I have more confidence, I would say in our political and social systems that if something is not working it will not work.” (Satya Nadella)

    And again: “unintended consequences.” (Satya Nadella)

    So what now?
    The process is inexorable.
    This will happen and in this order: first, business applications will occupy positions, usage will spread, experiences—both positive and negative—will accumulate.
    Later, companies that achieve significant market share will (conspicuously) reinvest a (small) part of their profits into programs organized to repair mental harm.
    In parallel with the spread of AI tutor applications, regulatory frameworks will of course also emerge, which will try to address phenomena belonging to the topic of “unintended consequences.”
    These will mostly be reactive, follower policies and regulations.
    There can always be exceptions, but not many. Perhaps in Scandinavia.

    So what should we rely on?
    On what we can rely on these days as well: attentive and supportive parents, relatives, siblings, friends, teachers.

    Humans.

    BRAIN2TEXT

14/11/2025 by Gabor Priegl

A huge step forward in the field of brain2text. A bit uncanny, but it works.

    Copilot – Gabor Priegl brain2text illustration

    What’s the topic?

    Representing the complex mental contents of humans as text, purely by decoding brain activity (brain2text).

    What’s the big deal?

    According to Tomoyasu Horikawa (NTT Communication Science Laboratories, computational neuroscientist):
For more than a decade, researchers have been able to accurately predict, based on brain activity, what a person is seeing or hearing – as long as it’s a static image or a simple sound sample.
    However, decoding complex content—such as short videos or abstract shapes—had not been successful until now.
    Now, for the first time in history, the content of short videos has been represented as text with high accuracy, purely by decoding the brain activity measured while watching the videos.

    How does it work?
    Based on Tomoyasu Horikawa’s research report, their team’s procedure generates descriptive text that reflects brain representations using semantic features computed by a deep language model.
    They built linear decoding models to translate brain activity induced by videos into the semantic features of the corresponding captions, then optimized the candidate descriptions by adjusting their features—through word replacement and interpolation—to match those decoded from the brain.
    This process resulted in well-structured descriptions that accurately capture the viewed content, even without relying on the canonical language network.
    The method was also generalizable to verbalizing content recalled from memory, thus functioning as an interpretive interface between mental representations and text, and simultaneously demonstrating the potential for nonverbal, thought-based brain-to-text communication, which could provide an alternative communication channel for individuals with language expression difficulties, such as those with aphasia.
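To visualise the pipeline described above, here is a toy Python sketch of the decode-then-optimize loop. Every function in it (embed_text, decode_features_from_brain, edit_caption) is my own placeholder for what are, in the real study, a deep language model and linear decoders trained on fMRI data – a minimal sketch, not the authors’ code.

```python
import numpy as np

# Toy stand-ins for the components described above; none of this is the authors' code.
def embed_text(caption: str) -> np.ndarray:
    """Semantic feature vector of a caption (a deep language model in the real method)."""
    rng = np.random.default_rng(abs(hash(caption)) % (2**32))
    return rng.standard_normal(512)                      # toy 512-d feature vector

def decode_features_from_brain(brain_activity: np.ndarray) -> np.ndarray:
    """Linear decoder mapping measured brain activity to semantic features."""
    rng = np.random.default_rng(0)
    W = rng.standard_normal((512, brain_activity.size)) / brain_activity.size
    return W @ brain_activity

def edit_caption(caption: str) -> list[str]:
    """Candidate variants via simple word replacement (the paper also interpolates features)."""
    words = caption.split()
    return [" ".join(words[:i] + ["something"] + words[i + 1:]) for i in range(len(words))]

def optimize_caption(brain_activity: np.ndarray, start: str, steps: int = 10) -> str:
    """Iteratively adjust the candidate caption so that its semantic features
    move closer to the features decoded from brain activity."""
    target = decode_features_from_brain(brain_activity)
    best, best_dist = start, np.linalg.norm(embed_text(start) - target)
    for _ in range(steps):
        for cand in edit_caption(best):
            d = np.linalg.norm(embed_text(cand) - target)
            if d < best_dist:
                best, best_dist = cand, d
    return best

print(optimize_caption(np.random.randn(2000), "a person walks a dog in a park"))
```

The point of the loop is only the structure: decode a target “meaning signature” from the brain, then edit a caption until its signature matches.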

    What was the big idea?
Some previous attempts used artificial intelligence (AI) models for the whole process in one go. While these models are capable of independently generating sentence structures—thus producing the text output—it is difficult to determine whether the output actually appeared in the brain or is merely the AI model’s interpretation.

Here comes Horikawa, who splits the process into two stages and, by this separation, prevents the above-mentioned problem from occurring.

    This is the big idea, in my opinion.

Horikawa’s method is to first use a deep language AI model to analyze the text captions of more than 2,000 videos, turning each into a unique numerical “meaning signature” (Stage 1).

Then a separate AI tool was trained on the brain scans of six participants, learning to recognize the brain activity patterns that matched each meaning signature while the participants watched the videos (Stage 2).

    Tomoyasu Horikawa 2025.

    Clever, I like it.
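To make the two-stage separation concrete for myself, here is a minimal, purely illustrative Python sketch: Stage 1 produces caption embeddings (simulated here with random vectors), Stage 2 trains a separate linear decoder from brain activity to those embeddings. The shapes and the plain ridge regression are my assumptions, not Horikawa’s implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_videos, n_voxels, n_features = 2000, 1000, 512

# Stage 1: a deep language model turns each video caption into a numerical
# "meaning signature" (simulated here with random vectors).
caption_features = rng.standard_normal((n_videos, n_features))

# Stage 2: a separate linear decoder is trained to map the brain activity recorded
# while watching each video onto that video's meaning signature.
brain_activity = rng.standard_normal((n_videos, n_voxels))
decoder = Ridge(alpha=1.0).fit(brain_activity, caption_features)

# At test time the decoder predicts a meaning signature from a new brain scan;
# candidate captions are then searched/optimized to match that prediction.
predicted = decoder.predict(brain_activity[:1])
print(predicted.shape)   # (1, 512)
```

Because the text-generating model and the brain decoder are separate pieces, whatever ends up in the caption has to be traceable to the decoded signature rather than to the language model’s imagination.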

    So, if you are uncertain about what depends on what and how in the system, make separations, introduce phases and the picture will become clearer! Especially if AI is also in play…

    References:

    https://www.scientificamerican.com/article/ai-decodes-visual-brain-activity-and-writes-captions-for-it/?utm_source=Klaviyo&utm_medium=email&utm_campaign=Technology+11%2F11%2F25&utm_term=AI+Decodes+Visual+Brain+Activity%E2%80%94And+Writes+Captions+for+It&_kx=aN264t3DeAXbCe6O6DCo9-cPyc433O4udOrwBNdqquA.WEer5A

    https://www.science.org/doi/10.1126/sciadv.adw1464#T1

    An AI vs AI Combat

07/11/2025 by Gabor Priegl

    MS Copilot – Priegl Gábor

A new world is taking shape: the era of AI, which happens to be a huge business:

    • The global AI market is worth almost $300 billion.
    • The AI market is set to hit $1.77 trillion by 2032.

    https://explodingtopics.com/blog/ai-market-size-stats#projections

The new hype is agentic AI, which has made its way everywhere.

    What is it all about, briefly?

    Here is an AI summary about agentic AI:

    “Agentic AI refers to autonomous AI systems that can independently plan and execute tasks to achieve a goal with minimal human oversight. Unlike traditional AI that requires step-by-step guidance, agentic AI uses advanced reasoning, such as that from large language models (LLMs), to make decisions and adapt in real-time to complex problems. Examples include AI systems that can manage an employee’s vacation request or handle IT support tickets. 

    Key characteristics

    • Autonomy: Agentic AI operates independently, making decisions and taking actions without constant human direction.
    • Goal-oriented: These systems are designed to achieve specific, pre-determined goals.
    • Adaptability: Agentic AI can adapt in real-time and handle complex, multi-step problems that may not have been explicitly programmed.

    Important considerations

    • Maturity: The field is still developing, and it is important to distinguish genuinely agentic systems from those marketed as such (sometimes called “agent washing”).
    • Implementation: Successful deployment requires more than just AI, including robust engineering discipline, data management, and monitoring.
    • Governance: Robust governance and analytics are needed to ensure the AI operates in a controlled and safe manner.” 

Everyone understands the simple case in the example (managing an employee’s vacation request): you have to organize a trip—with flights, accommodation, car rental, sightseeing, etc.—while considering factors like the available budget, the time frame, and maximizing different target values, such as value-for-cost efficiency. This case is relatively simple because a human formulates a task for the AI assistant, which then examines various scenarios, taking into account what is necessary, presents the feasible ones, offers options for decisions, iterates—sometimes multiple times—and finally produces the version to be implemented and organizes everything.
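To make this workflow tangible, here is a deliberately tiny Python sketch of the plan–evaluate–choose loop: enumerate scenarios, keep the ones within budget, rank them, and “book” the winner. The price tables and the scoring rule are invented placeholders; a real agent would call booking APIs and use an LLM to reason about the trade-offs.

```python
from itertools import product

# Hypothetical price tables -- a real agent would query flight/hotel/car-rental APIs.
FLIGHTS = {"economy": 400, "business": 1200}            # return ticket
HOTELS = {"3-star": 90, "4-star": 150, "5-star": 280}   # per night
CARS = {"none": 0, "compact": 35, "suv": 60}            # per day

def plan_trip(budget: float, nights: int) -> list[dict]:
    """Enumerate feasible scenarios within budget and rank them
    by a simple 'value' proxy (use as much of the budget as possible)."""
    scenarios = []
    for flight, hotel, car in product(FLIGHTS, HOTELS, CARS):
        cost = FLIGHTS[flight] + nights * (HOTELS[hotel] + CARS[car])
        if cost <= budget:
            scenarios.append({"flight": flight, "hotel": hotel, "car": car, "cost": cost})
    return sorted(scenarios, key=lambda s: -s["cost"])

# The agent presents the ranked options, the human picks (or the agent auto-picks),
# and booking would follow -- here we just print the winning scenario.
options = plan_trip(budget=1500, nights=5)
print(f"{len(options)} feasible scenarios; top pick: {options[0] if options else None}")
```

The iteration the paragraph describes is just this loop run again with adjusted constraints (a different budget, different dates) until the human is satisfied.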

    The case, however, that you can read about below in the MIT article, is very different and much more complex and exciting, and this will be the real challenge for all of us.

In the following scenario, AI agents face each other: one has a purchasing task, the other a selling task. Strictly speaking, the research is (only) about price negotiation, but the agents competed in different sales areas (buying and selling electronics, motor vehicles, and real estate), both as sellers and buyers.
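Before the highlights, a thought experiment of my own (not the study’s actual setup): two rule-based agents negotiating a price, a seller with a hidden cost floor and a buyer with a hidden budget ceiling, each conceding a little every round until their offers cross or they run out of rounds. In the research the agents are LLMs, but even this toy shows how the same mechanism can end in a deal or in a stalled negotiation, depending on the parameters.

```python
def negotiate(seller_floor: float, buyer_ceiling: float, max_rounds: int = 10):
    """Toy agent-vs-agent price negotiation with fixed 5% concessions per round."""
    ask = seller_floor * 1.5          # seller's opening offer
    bid = buyer_ceiling * 0.6         # buyer's opening offer
    for round_no in range(1, max_rounds + 1):
        if ask <= bid:                # offers crossed: close the deal mid-way
            return round((ask + bid) / 2, 2), round_no
        ask = max(seller_floor, ask * 0.95)   # seller concedes, never below cost
        bid = min(buyer_ceiling, bid * 1.05)  # buyer concedes, never above budget
    return None, max_rounds           # no agreement: a stalled negotiation

print(negotiate(seller_floor=800, buyer_ceiling=1000))    # closes after a few rounds
print(negotiate(seller_floor=1100, buyer_ceiling=1000))   # (None, 10): deadlock
```

With seller_floor=1100 and buyer_ceiling=1000 the same loop never closes – a crude analogue of the prolonged negotiation loops the researchers observed.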

    The details can be read in the referenced MIT article, which I have read several times.

    Here are the most important points highlighted:

“… access to more advanced AI models—those with greater reasoning ability, better training data, and more parameters—could lead to consistently better financial deals, potentially widening the gap between people with greater resources and technical access and those without. If agent-to-agent interactions become the norm, disparities in AI capabilities could quietly deepen existing inequalities.”

“Over time, this could create a digital divide where your financial outcomes are shaped less by your negotiating skill and more by the strength of your AI proxy,” says Jiaxin Pei, a postdoc researcher at Stanford University and one of the authors of the study.

“One notable pattern was that some agents often failed to close deals but effectively maximized profit in the sales they did make, while others completed more negotiations but settled for lower margins. GPT-4.1 and DeepSeek R1 struck the best balance, achieving both solid profits and high completion rates.”

“Beyond financial losses, the researchers found that AI agents could get stuck in prolonged negotiation loops without reaching an agreement—or end talks prematurely, even when instructed to push for the best possible deal. Even the most capable models were prone to these failures.” “The result was very surprising to us,” says Pei. “We all believe LLMs are pretty good these days, but they can be untrustworthy in high-stakes scenarios.”

    “This study is part of a growing body of research warning about the risks of deploying AI agents in real-world financial decision-making. Earlier this month, a group of researchers from multiple universities argued that LLM agents should be evaluated primarily on the basis of their risk profiles, not just their peak performance.”

    For now, Pei advises consumers to treat AI shopping assistants as helpful tools—not stand-ins for humans in decision-making.

    All the details:

    https://www.technologyreview.com/2025/06/17/1118910/ai-price-negotiation/?utm_source=the_download&utm_medium=email&utm_campaign=the_download.unpaid.engagement&utm_term=&utm_content=11-06-2025&mc_cid=c073fdedea&mc_eid=d55319adcd

    For now, this is where we stand.

    AI has enormous business potential.

    The development is ongoing, and it’s certain that even now, AI agents are already negotiating simpler procurement tasks (like purchasing A4 paper), but it’s clear that AI solutions will quickly move up the complexity scale.

    What comes next?

    For example, managing smaller operational human teams.

    Then comes the AI mid-management.

    And so on.

I am not a pessimist, I am a realist, because we all know what an insatiable appetite for profit Capital has.

