Gabor Priegl Blog

Blog on Management, Effectiveness and Efficiency


    BRAIN2TEXT

14/11/2025 by Gabor Priegl

    A huge step forward in the field of brain2text. A bit uncanny but it works.

    Copilot – Gabor Priegl brain2text illustration

    What’s the topic?

    Representing the complex mental contents of humans as text, purely by decoding brain activity (brain2text).

    What’s the big deal?

    According to Tomoyasu Horikawa (NTT Communication Science Laboratories, computational neuroscientist):
For more than a decade, researchers have been able to accurately predict, based on brain activity, what a person is seeing or hearing, as long as it is a static image or a simple sound sample.
    However, decoding complex content—such as short videos or abstract shapes—had not been successful until now.
    Now, for the first time in history, the content of short videos has been represented as text with high accuracy, purely by decoding the brain activity measured while watching the videos.

    How does it work?
    Based on Tomoyasu Horikawa’s research report, their team’s procedure generates descriptive text that reflects brain representations using semantic features computed by a deep language model.
    They built linear decoding models to translate brain activity induced by videos into the semantic features of the corresponding captions, then optimized the candidate descriptions by adjusting their features—through word replacement and interpolation—to match those decoded from the brain.
    This process resulted in well-structured descriptions that accurately capture the viewed content, even without relying on the canonical language network.
The method also generalized to verbalizing content recalled from memory, thus functioning as an interpretive interface between mental representations and text. At the same time, it demonstrated the potential of nonverbal, thought-based brain-to-text communication, which could provide an alternative communication channel for individuals with language expression difficulties, such as those with aphasia.

    What was the big idea?
Some previous attempts used artificial intelligence (AI) models for the whole process in one step. While these models can independently generate sentence structures, and thus produce the text output, it is difficult to determine whether the output actually appeared in the brain or is merely the AI model’s interpretation.

Here comes Horikawa, who splits the process into two stages; this separation prevents the problem mentioned above from occurring.

    This is the big idea, in my opinion.

Horikawa’s method is to first use a deep language AI model to analyze the text captions of more than 2,000 videos, turning each into a unique numerical “meaning signature” (Stage 1).

Then a separate AI tool was trained on the brain scans of six participants, learning to recognize the brain activity patterns that matched each meaning signature as the participants watched the videos (Stage 2).

    Tomoyasu Horikawa 2025.
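To make the two-stage idea concrete, here is a purely illustrative sketch, not Horikawa’s actual code or data: the dimensions are invented, the “meaning signatures” and brain responses are synthetic random data, and a ridge regression stands in for the real linear decoding models.

```python
# Toy sketch of the two-stage idea (NOT the study's actual pipeline).
# Stage 1: each caption gets a fixed "meaning signature" vector.
# Stage 2: a linear model learns to map brain activity to those vectors.
# All names, dimensions and data below are invented for illustration.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_videos, n_voxels, n_sem = 200, 500, 64

# Stage 1 (stand-in): pretend a language model already embedded each
# caption into a 64-dimensional semantic feature vector.
caption_features = rng.normal(size=(n_videos, n_sem))

# Simulated fMRI responses: a linear mixture of the semantic features plus noise.
mixing = rng.normal(size=(n_sem, n_voxels))
brain_activity = caption_features @ mixing + 0.1 * rng.normal(size=(n_videos, n_voxels))

# Stage 2: linear decoding model from brain activity to semantic features.
decoder = Ridge(alpha=1.0)
decoder.fit(brain_activity[:150], caption_features[:150])

# Decode held-out trials, then pick the candidate caption whose signature
# is most similar to the decoded features (cosine similarity).
decoded = decoder.predict(brain_activity[150:])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

hits = sum(
    int(np.argmax([cosine(d, c) for c in caption_features]) == 150 + i)
    for i, d in enumerate(decoded)
)
print(f"correct caption identified for {hits}/50 held-out videos")
```

The separation is visible in the code: the semantic features exist before any brain data is touched, so whatever the decoder produces can only come from the measured activity, never from a language model inventing content on its own.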

    Clever, I like it.

    So, if you are uncertain about what depends on what and how in the system, make separations, introduce phases and the picture will become clearer! Especially if AI is also in play…

    References:

    https://www.scientificamerican.com/article/ai-decodes-visual-brain-activity-and-writes-captions-for-it/?utm_source=Klaviyo&utm_medium=email&utm_campaign=Technology+11%2F11%2F25&utm_term=AI+Decodes+Visual+Brain+Activity%E2%80%94And+Writes+Captions+for+It&_kx=aN264t3DeAXbCe6O6DCo9-cPyc433O4udOrwBNdqquA.WEer5A

    https://www.science.org/doi/10.1126/sciadv.adw1464#T1

    An AI vs AI Combat

    07/11/2025 by Gabor Priegl Leave a Comment

    MS Copilot – Priegl Gábor

A new world is taking shape: the era of AI, which happens to be a huge business:

    • The global AI market is worth almost $300 billion.
    • The AI market is set to hit $1.77 trillion by 2032.

    https://explodingtopics.com/blog/ai-market-size-stats#projections

The new hype is agentic AI, which has made its way everywhere.

    What is it all about, briefly?

    Here is an AI summary about agentic AI:

    “Agentic AI refers to autonomous AI systems that can independently plan and execute tasks to achieve a goal with minimal human oversight. Unlike traditional AI that requires step-by-step guidance, agentic AI uses advanced reasoning, such as that from large language models (LLMs), to make decisions and adapt in real-time to complex problems. Examples include AI systems that can manage an employee’s vacation request or handle IT support tickets. 

    Key characteristics

    • Autonomy: Agentic AI operates independently, making decisions and taking actions without constant human direction.
    • Goal-oriented: These systems are designed to achieve specific, pre-determined goals.
• Adaptability: Agentic AI can adapt in real-time and handle complex, multi-step problems that may not have been explicitly programmed.

    Important considerations

    • Maturity: The field is still developing, and it is important to distinguish genuinely agentic systems from those marketed as such (sometimes called “agent washing”).
    • Implementation: Successful deployment requires more than just AI, including robust engineering discipline, data management, and monitoring.
    • Governance: Robust governance and analytics are needed to ensure the AI operates in a controlled and safe manner.” 

Everyone understands the simple case in the example (managing an employee’s vacation): you have to organize a trip (flights, accommodation, car rental, sightseeing, etc.) while considering factors like the available budget, the time frame, and target values such as value/cost efficiency. This case is relatively simple because a human formulates a task for the AI assistant, which then examines various scenarios, taking into account what is necessary, presents the feasible ones, offers options for decisions, iterates (sometimes several times), produces the version to be implemented and, finally, organizes everything.

The case you can read about below in the MIT article, however, is very different: much more complex and exciting, and this will be the real challenge for all of us.

In the following scenario, AI agents face each other. One has a purchasing task, the other a selling task. Narrowly speaking, the research is (only) about price negotiation, but the agents competed in different sales areas (buying and selling of electronics, motor vehicles and real estate), both as sellers and buyers.
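The study itself pitted LLM agents against each other; as a purely hypothetical illustration of the alternating-offers shape such a negotiation takes (all prices, concession rates and the stopping rule below are invented), a rule-based toy version might look like this:

```python
# Toy agent-vs-agent price negotiation (illustrative only — the MIT study
# used LLM agents; this rule-based sketch just shows the protocol shape).
# Reservation prices and the concession rate are made-up parameters.

def negotiate(seller_min, buyer_max, seller_ask, buyer_bid,
              concession=0.1, max_rounds=20):
    """Alternate offers; each side concedes a fixed fraction of the gap
    toward its own reservation price until the offers cross."""
    for round_no in range(max_rounds):
        if buyer_bid >= seller_ask:               # offers crossed: deal closes
            return round_no, (seller_ask + buyer_bid) / 2
        seller_ask = max(seller_min,
                         seller_ask - concession * (seller_ask - seller_min))
        buyer_bid = min(buyer_max,
                        buyer_bid + concession * (buyer_max - buyer_bid))
    return None  # endless haggling without agreement — one failure mode the study found

result = negotiate(seller_min=800, buyer_max=1000, seller_ask=1200, buyer_bid=600)
if result:
    rounds, price = result
    print(f"deal after {rounds} rounds at {price:.2f}")
```

Even in this toy, the two failure modes from the article are visible in the parameters: a concession rate near zero reproduces the prolonged negotiation loops, while an overly large one reproduces agents that close quickly but settle for thin margins.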

    The details can be read in the referenced MIT article, which I have read several times.

    Here are the most important points highlighted:

    “… access to more advanced AI models —those with greater reasoning ability, better training data, and more parameters—could lead to consistently better financial deals, potentially widening the gap between people with greater resources and technical access and those without. If agent-to-agent interactions become the norm, disparities in AI capabilities could quietly deepen existing inequalities.“

    “Over time, this could create a digital divide where your financial outcomes are shaped less by your negotiating skill and more by the strength of your AI proxy,” says Jiaxin Pei, a postdoc researcher at Stanford University and one of the authors of the study.”

    “One notable pattern was that some agents often failed to close deals but effectively maximize profit in the sales they did make, while others completed more negotiations but settled for lower margins. GPT-4.1 and DeepSeek R1 struck the best balance, achieving both solid profits and high completion rates.

Beyond financial losses, the researchers found that AI agents could get stuck in prolonged negotiation loops without reaching an agreement—or end talks prematurely, even when instructed to push for the best possible deal. Even the most capable models were prone to these failures. “The result was very surprising to us,” says Pei. “We all believe LLMs are pretty good these days, but they can be untrustworthy in high-stakes scenarios.”

    “This study is part of a growing body of research warning about the risks of deploying AI agents in real-world financial decision-making. Earlier this month, a group of researchers from multiple universities argued that LLM agents should be evaluated primarily on the basis of their risk profiles, not just their peak performance.”

    For now, Pei advises consumers to treat AI shopping assistants as helpful tools—not stand-ins for humans in decision-making.

    All the details:

    https://www.technologyreview.com/2025/06/17/1118910/ai-price-negotiation/?utm_source=the_download&utm_medium=email&utm_campaign=the_download.unpaid.engagement&utm_term=&utm_content=11-06-2025&mc_cid=c073fdedea&mc_eid=d55319adcd

    For now, this is where we stand.

    AI has enormous business potential.

    The development is ongoing, and it’s certain that even now, AI agents are already negotiating simpler procurement tasks (like purchasing A4 paper), but it’s clear that AI solutions will quickly move up the complexity scale.

    What comes next?

    For example, managing smaller operational human teams.

    Then comes the AI mid-management.

    And so on.

I am not a pessimist, I am a realist, because we all know what an insatiable appetite for profit Capital has.

    The Price of Lying

28/09/2025 by Gabor Priegl

    https://www.printedtoday.co.uk/

For some reason, it is the expression in the title that has become widespread in this form in Hungarian, but when it comes to the organizational behavior of companies, “the cost of dishonesty and deception” is more accurate.

    So, we are talking about costs in this article.

The question we are going to discuss here is: how much more expensive is it to manage an organization where team members do not have a clear picture of the company’s daily operations, and where deception, concealment and lying prevail instead of honest, clear communication and behavior?

I have not found any reference that provides a quantified answer to this question. I don’t think a comprehensive, general model that handles all the possible aspects and calculates the related extra costs can ever be set up, because every organization is unique.

Instead of quantitative modelling, several qualitative classifications have been set up. However, most of the extra-cost-generating factors can be identified with a little brainwork by each of us.

    For example, here is an excellent summary diagram from an article in the MITSloan Management Review SPRING 2004 VOL.45 NO.3 issue that deals with the topic:

    Robert B. Cialdini, Petia K. Petrova and Noah J. Goldstein: The Hidden Costs of Organizational Dishonesty

Nice, but even without any sophisticated model, solely on the basis of our management experience, we can state that if blatant lies, white lies, or even just deception, concealment, obfuscation, misinterpretation and unfulfilled promises appear in the life of an organization, the cost of operation increases.

    This is so, because the conspirators (if they do not want to fail quickly) must continuously maintain an updated “mental accounting” of when, to whom, and what they lied about, and be prepared to immediately resolve any related confrontations. That requires time and energy.

The following writing focuses “only” on the hidden costs generated by “executive deception,” but it is definitely an authoritative piece, because everyone can confirm that norms in an organization spread downward from the No. 1 leader. The behavior pattern of “executive deception” propagates downward and sideways, infecting almost the entire organization. There will always be small islands, but they will only survive temporarily.

    https://quarterdeck.co.uk/articles/when-leadership-lies

    My conclusion: it is economically justified to develop a company based on clear and straightforward communication and behavior, as this is essential for the lowest-cost operation of the organization.

    I built such a company recently.

    It was great to work there.

    Call any MA-Coding colleagues if you want validation.

    The Seesaw

11/09/2025 by Gabor Priegl

    Photo from kertironkjatek.hu

    Anyone involved in custom software development experiences that, due to the project-oriented nature of the activity, there is rarely a state of equilibrium in terms of the company’s performance: there is either too much work or too little compared to the size of the team.

It’s like the seesaw from our childhood playground. As long as children of similar weight were swinging, everything worked fine, but if there was a change at one end and a heavier child sat down, the child at the other end had to hold on tight to balance the seesaw, or even just to stay on and not fall off at all. And you remember, don’t you, when a daring boy jumped onto the middle of the seesaw, one foot on each arm, and helped maintain balance and rhythm (or quite the opposite, depending on his intention:))?

With the image of the playground seesaw in front of us, let’s see everything this metaphor holds!

    Let’s take a simple, ultra-flat software development company. It starts with around ten people, aiming to ensure healthy, sustainable growth year after year, in revenue and EBITDA.

    Our seesaw has an order backlog roughly sized for the team, consisting of a few projects on which the developers are diligently working.

    Because they work excellently, their good reputation spreads, and the company receives new orders. The expected workload increases, so the CEO in the middle of the seesaw helps restore balance by hiring more developers for the company.

    But only cautiously, because increasing the team size will temporarily reduce the developers’ workload, negatively impacting EBITDA: the seesaw is a delicate instrument, and it must be handled carefully!

    Then, of course, once the team expansion is done, the seesaw tips, and the CEO focuses on increasing the order backlog.

    And if the team is skillful, they play with the seesaw continuously and beautifully, with relatively small amplitudes, and the company grows.

This is a sensitive system. Moreover, as the company grows (let’s imagine it this way), the arms of the seesaw become longer, to accommodate the employees on one side and the projects on the other; the model’s sensitivity to one unit of intervention increases, the swings can become larger, and that means the business risk increases.
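The growing-sensitivity point can be given a rough numeric reading. The sketch below is my own toy model, not a validated business simulation: all constants are invented, and the only thing it demonstrates is that a controller with a fixed-size intervention (hire or release one developer at a time) produces proportionally larger swings as the company grows.

```python
# Toy seesaw model (illustrative only, all numbers invented): "tilt" is the
# gap between order backlog and team capacity, and the CEO corrects it in
# fixed-size steps regardless of company size.

def swing(capacity, intervention, quarters=8, inflow_ratio=1.1):
    """Return the largest relative backlog/capacity imbalance over the run."""
    backlog = capacity
    worst = 0.0
    for _ in range(quarters):
        # orders come in 10% faster than the team delivers
        backlog += inflow_ratio * capacity - capacity
        if backlog > capacity:
            capacity += intervention                    # hire cautiously
        elif backlog < capacity:
            capacity = max(1.0, capacity - intervention)  # shrink cautiously
        worst = max(worst, abs(backlog - capacity) / capacity)
    return worst

# the same fixed-step controller tuned for a 10-person shop gets much
# rougher at 100 people: the seesaw's arms have become longer
print(swing(capacity=10.0, intervention=1.0))
print(swing(capacity=100.0, intervention=1.0))
```

In other words, keeping the amplitudes small at a larger size requires scaling the interventions (and the care behind them) with the length of the arms, which is exactly the elevated risk described above.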

    After years of continuous corporate growth, due to the elevated risk level, a clumsy intervention on any arm of the seesaw can have serious negative consequences.

    However, without rough external interventions, with careful management within the equilibrium range, the team achieves significant (financial) results and professional successes.

    Closing Thought

    The seesaw metaphor above is a system; every element is important: the development team as a whole and individually every member of the team, the flat organization, regular, honest, to-the-point communication, the CEO.

    If anyone is still curious about the “essence” of the above, “what makes” the whole thing, they should read István Örkény’s writing: The Meaning of Life and meditate longer on the topic.

    AI – a personal story

02/09/2025 by Gabor Priegl

    Pict from Fruitnet

Many years ago, in 1990, I happened to be inching along toward the end of my 8th semester at the Technical University of Budapest when I recognized that I hadn’t had any aha experience during those years in academia.

Then, even to my own surprise, I chose a strange, maybe “outlier” course: economic psychology. It covered a wide range of interesting topics and offered us a short immersion into the world of neural networks, too. In 1990!

The neural networks topic presented one of the strangest questions, one whose very simplicity caught my attention: what makes an apple an apple, what makes a pear a pear?

    Simple enough, isn’t it?

We all know that it is not a question that can be answered by defining descriptors and value ranges for them. Color, shape, taste, surface… you can keep listing dimensions that try to depict an apple. Even if you were able to define which dimensions count (and which don’t), the number of appropriate combinations of suitable values would exceed human capacities. And we would always find a counterexample.

    Somehow, we all have the feeling that there should be another way to answer the question: what makes an apple an apple, what makes a pear a pear.

And indeed there is. As little children we, too, learned by examples what an apple is and what a pear is.

We were taught to recognize apples versus pears. It took us a while, but eventually we reached, almost imperceptibly, a point where we were able to decide whether a given fruit was an apple or a pear.

Recognizing that there are problems where teaching, and not a descriptor-based, rule-driven process, is the appropriate solution was a revelation to me.
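That contrast, learning from labeled examples instead of hand-written rules, can be sketched in a few lines. This is my own minimal illustration, not anything from the course: the two features (roundness and width-to-height ratio) and their values are invented toy descriptors, and a nearest-neighbour rule stands in for a real neural network.

```python
# Minimal "learning from examples" sketch: no rules about what an apple is,
# only labelled examples and a nearest-neighbour decision.
# The features and values below are invented toy descriptors.

examples = [
    # (roundness 0..1, width-to-height ratio, label)
    (0.90, 1.00, "apple"),
    (0.85, 0.95, "apple"),
    (0.95, 1.05, "apple"),
    (0.60, 0.70, "pear"),
    (0.55, 0.65, "pear"),
    (0.65, 0.75, "pear"),
]

def classify(roundness, ratio):
    """Label a new fruit by its closest labelled example (1-nearest-neighbour)."""
    def dist(ex):
        return (ex[0] - roundness) ** 2 + (ex[1] - ratio) ** 2
    return min(examples, key=dist)[2]

print(classify(0.88, 0.98))  # near the apple examples
print(classify(0.58, 0.68))  # near the pear examples
```

Nobody ever wrote down what makes an apple an apple here; the answer emerges from the examples alone, which is the same shift in perspective that neural networks make.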

    And that is what draws me to AI, machine learning and neural networks.

    Why are you interested in the world of AI?

    Write me about it, please.

    G.

    AI, what it is and what it isn’t

24/09/2023 by Gabor Priegl

    Copyright UNNews

    Let’s cut through the chaos and get the basics first about AI.

    If you follow the developments in the world of technology you couldn’t avoid the breakthrough news a couple of months ago.
The demonstrated capabilities of an artificial intelligence (AI) application (ChatGPT, developed by OpenAI) have reached the stimulus threshold of even the average IT user. It is a generative AI application that creates human-like text and images instantly.
“Hey, it’s quite as if I were talking with a fellow human, considering the style, the speed of reaction and the wide range of topics that can be discussed” – that is the surprising experience of users testing ChatGPT.

This particular application has thrust AI into the limelight.

Everyone is excited about the potential of AI. Business leaders, investors and a wide range of users feel the buzz without knowing what it is all about. And that uncertainty, that undefinedness of the topic itself, may amplify the interest and helps fan fantasies about artificial intelligence.

    Because, yes, there is no widely accepted, settled single definition of AI.

It means different things to different people; plus, it is hyped these days and everyone is trying to apply AI as an eye-catching label to their product, solution or concept.

Despite this, it is not hopeless to develop an AI framework for ourselves by organizing our thoughts and systematizing the pieces of information.

Even if there isn’t one definition, we can still have a look at one of the most well-known and complex examples of AI in order to identify its different aspects and dimensions.

The overwhelming majority of people associate self-driving cars with AI, so this seems to be a good starting point. What are the main elements here? A complex, real, multi-actor and fast-changing environment in which the machine collects the ever-changing status of items, runs analyses and, based on them, makes decisions in real time.


    A couple of things are obvious in the example:

    • the application generates a real time customer experience we have never witnessed before,
• it utilizes (relatively) new technological fields like computer vision, breakthroughs in search-and-find methods, pattern recognition functions, and decision-making processes under higher levels of uncertainty,
    • and the machine imitates human behavior quite well.

    But this is only scratching the surface.
What really matters here are autonomy and adaptivity. These make the difference.

    Now, we have taken a good example. Let’s try to summarize the findings and provide some outlines.

    So, what is AI?

1: AI is a discipline, part of Computer Science, heavily overlapping with Data Science. AI has some essential, signature sub-areas such as Machine Learning and Deep Learning. The aim of AI as a discipline is to research, develop, formulate and measure mathematical models that can be applied in computer systems in order to solve complex problems in complex and changing environments, where the computer systems operate in an (as far as possible) autonomous and adaptive mode.

2: AI is an approach and a (rapidly developing) portfolio of problem-solving methods based on the discipline above. The final evaluation and acceptance of the results produced by the method come from humans, so humans’ decision about what is right or wrong is still the determining factor.

3: AI is a concrete realization of a problem-solving solution. The focus is on Autonomy and Adaptivity. In the case of the latter, developing the solution requires an external reference for continuously improving the model(s) and training data sets and for shaping the quality of outcomes.

    I hope this article helped you clarify what AI is.

    Any thoughts and remarks are welcome.

    G.


    © 2017