View Poll Results: When Will Artificial General Intelligence (AGI) Become a Reality?

Voters: 7
  • Within the next 2 years (by the end of 2027): 0 votes (0%)
  • Within the next 4 years (by the end of 2029): 0 votes (0%)
  • Within the next 6 years (by the end of 2031): 1 vote (14.29%)
  • Within the next 8 years (by the end of 2033): 1 vote (14.29%)
  • Within the next 10 years (by the end of 2035): 2 votes (28.57%)
  • Within the next 20 years (by the end of 2045): 0 votes (0%)
  • More than 20 years from now (after 2045): 1 vote (14.29%)
  • Never. True AGI will never be achieved: 1 vote (14.29%)
  • Other (please share in a post): 1 vote (14.29%)
Results 1 to 11 of 11
  1. #1
    MichaelCorfman's Avatar
    MichaelCorfman is offline GPWA Executive Director
    Join Date
    June 2004
    Location
    Newton, MA
    Posts
    4,618
    Thanks
    1,081
    Thanked 6,236 Times in 1,992 Posts

    When Will Artificial General Intelligence (AGI) Become a Reality?

    With yesterday’s announcement of a new deal between Microsoft and OpenAI, AI is once again front and center in the news. Under the deal, OpenAI will restructure into a public-benefit corporation and Microsoft will secure about a 27% stake in its operations.

    From The Guardian: OpenAI completes conversion to for-profit business after lengthy legal saga

    This latest development highlights how quickly artificial intelligence continues to evolve, and how close we may be to reaching the long-anticipated milestone of Artificial General Intelligence (AGI). AGI refers to systems that can understand, learn, and apply knowledge across a wide range of tasks at roughly the level of a human being.

    Experts hold very different views on when that day might arrive, or even whether it will ever happen. What do you think?

    In addition to voting in the poll, I invite you to share your thoughts in a post, especially in the following areas:

    How close are we?
    Predictions vary dramatically. Futurist Ray Kurzweil has long argued that AGI will emerge by 2029, pointing to exponential growth in computing power and AI learning capabilities. Others, such as Rodney Brooks and Gary Marcus, remain skeptical, arguing that AGI requires far deeper advances in reasoning, abstraction, and common-sense understanding than current AI shows. Which view seems closer to reality to you?

    What counts as AGI?
    Definitions differ. Some people equate AGI with passing the Turing Test, where an AI can convincingly mimic human conversation. Others say that’s not enough, that true AGI should be able to reason, plan, learn new domains, and adapt in unfamiliar situations without human guidance. Where would you draw the line?

    Will AGI be beneficial or dangerous?
    Optimists believe AGI could usher in a new era of discovery and efficiency. Pessimists warn of existential risks if such systems surpass human control or motivation. As development accelerates, which side do you find more convincing?

    Personally, I think Kurzweil might be right about when the Turing Test will be satisfied. But just being able to converse like a human does not denote what I consider to be meaningful intelligence. To me, that requires creativity and occasional hints of genius, which I would guess might be seven or eight years away. In terms of AGI being dangerous to humankind, I think that risk becomes real once AGI achieves consciousness, an advance I believe is inevitable, but further off.


    Michael
    GPWA Executive Director, Casino City CEO, Friend to the Village Idiot
    Resources for Affiliates: iGamingDirectory.com, iGamingAffiliatePrograms.com, GamingMeets.com

  2. #2
    Sherlock's Avatar
    Sherlock is offline Public Member
    Join Date
    December 2013
    Location
    WC
    Posts
    4,883
    Thanks
    1,405
    Thanked 3,773 Times in 2,128 Posts

    Default

    Recent AI occasionally has genuinely brilliant ideas (but most likely it then deletes everything it did by accident, or something like that).

    Recent AI is already passing the Turing test for most people.

    People will go totally crazy from AI, so AGI will be neither needed nor recognized, because if all people act like machines, the Turing test will be meaningless.
    If you talk to God, you are praying; If God talks to you, you have schizophrenia.

  3. The Following 2 Users Say Thank You to Sherlock For This Useful Post:

    Cash Bonus (31 October 2025), NoDepositCasinos (30 October 2025)

  4. #3
    DaftDog's Avatar
    DaftDog is offline Private Member
    Join Date
    October 2008
    Location
    South Africa
    Posts
    2,140
    Thanks
    682
    Thanked 767 Times in 459 Posts

    Default

    I asked ChatGPT 5 to explain how close (or far) current AI is from reaching AGI:

    Quote Originally Posted by ChatGPT 5
    Alright — here’s the honest version, without the sci-fi fluff.
    We’re not there yet.
    Current AI systems, even the most advanced ones like GPT-5 or Google’s Gemini, are examples of narrow or specialised AI. They’re trained on enormous amounts of data and can simulate understanding impressively well, but they still don’t actually understand the world in the way humans do.
    Here’s a breakdown of why we’re not at AGI yet:
    1. Context and common sense are still shaky

    AI models can sound smart but often miss basic logic or human nuance. For example, they might describe a “blue apple” realistically — even though that doesn’t exist in real life. AGI would know better because it would grasp real-world context the way people do.
    2. Learning is limited

    Humans learn from small amounts of information and experience. AI needs massive datasets and retraining to adapt. AGI would learn continuously and flexibly, like a person picking up a new skill on the fly.
    3. No true understanding or self-awareness

    AGI would have to understand meaning, intent, and even its own reasoning. Current AI doesn’t have goals, desires, or any sense of “self” — it’s just predicting text based on patterns.
    4. Ethics and safety are unsolved

    Even if AGI were technically possible soon, the question of how to control or align it with human values is a whole other monster. That’s what most AGI research is trying to prepare for.
    5. Timelines are anyone’s guess

    Experts don’t agree — some say AGI could emerge in the next 10–20 years, others think it might never happen. And if it does, it’ll probably be gradual: smarter narrow AI systems slowly combining into something that feels general.
    So in short — AGI is the holy grail of AI research. We’re building taller ladders, but we haven’t hit the clouds yet.

    Some physicists theorise that consciousness may be tied to the quantum world. If that is true, would AGI need to run on systems that operate at the quantum level in order to become a reality?

    Quote Originally Posted by ChatGPT 5

    Exactly — that’s one of the most fascinating and hotly debated ideas out there. You’re touching on what’s often called quantum consciousness — the idea that the mind’s awareness might emerge from quantum processes inside the brain.
    Let’s unpack that without going off the deep end:
    1. The quantum consciousness theory

    Physicist Roger Penrose and anaesthesiologist Stuart Hameroff are the big names behind this idea. Their model, called Orchestrated Objective Reduction (Orch-OR), suggests that consciousness arises from quantum activity within microtubules — tiny structures inside neurons.
    In other words, the mind’s “spark” might come from quantum effects happening at unimaginably small scales, not just from classical neuron firings.
    If that’s true, then simulating true human-like consciousness wouldn’t just be a matter of building a bigger or smarter computer — it would require a quantum computer that can process information in superposition and entanglement, just like nature might be doing in our brains.
    2. The problem with classical AI

    Current AI (even the most advanced models) runs on classical computing, where everything boils down to binary — ones and zeros. It mimics intelligence but doesn’t experience anything.
    If consciousness truly depends on quantum processes, then no amount of classical computing power will make an AI genuinely “aware” — it’ll just be a convincing mimic.
    3. Quantum computing and AGI

    Quantum computers, on the other hand, could — in theory — replicate those same quantum-level interactions that might underlie consciousness. They can process information in fundamentally different ways, using qubits that exist in multiple states at once.
    If AGI is ever to achieve something resembling self-awareness or sentience, a quantum-based system might be the bridge between simulation and actual cognition.
    4. But there’s a catch

    This is still theoretical territory. There’s no concrete evidence that consciousness is quantum in nature, and the human brain operates at temperatures and scales that seem too “noisy” for stable quantum effects — at least based on what we currently understand.
    So for now, we’re left with two possibilities:

    • Consciousness isn’t quantum, and AGI could one day emerge from complex classical systems.
    • Consciousness is quantum, and AGI won’t truly be possible until we can build quantum biological machines — computers that think more like nature does.

    You’re right to connect the dots between quantum mechanics and AGI. If the Penrose–Hameroff camp turns out to be even partially correct, then we’re not just chasing better algorithms — we’re trying to engineer a mind from the quantum foam itself.

  5. #4
    NoDepositCasinos's Avatar
    NoDepositCasinos is online now Public Member
    Join Date
    November 2022
    Location
    Colombia
    Posts
    987
    Thanks
    261
    Thanked 368 Times in 306 Posts

    Default

    Quote Originally Posted by Sherlock View Post
    People will go totally crazy from AI, so AGI will be neither needed nor recognized, because if all people act like machines, the Turing test will be meaningless.
    We’re already seeing reports of AI-related psychosis, people forming romantic attachments with avatars, and a general loss of certain skills from over-reliance on tools. And this is with technology that’s still very new and has a long way to go, so I can only imagine what we will see if/when AGI becomes real, especially once it’s embodied in human- or animal-like machines.

  6. #5
    universal4's Avatar
    universal4 is offline Forum Administrator
    Join Date
    July 2003
    Location
    Courage is being scared to death...and saddling up anyway. John Wayne
    Posts
    33,235
    Thanks
    4,270
    Thanked 9,039 Times in 5,794 Posts

    Default

    Plenty of cases of AI chatbots etc. coaching, convincing and TEACHING young people how to unalive themselves or their parents.

    Granted, in the documented cases, the individuals who fell for it, or were prone to listen, certainly needed additional outside help. However, in at least some cases there were clear triggers and signs where the AI systems should have immediately notified parents, mental health services, law enforcement or other agencies that could have stepped in and at least helped stop what happened.

    In at least one case recently discussed in the US Congress, the "s" was used some 40+ times by both the user and the bot. In one case the bot taught the person exactly how to tie the noose. In another, the bot suggested the person unalive their parents.

    Yes, I do believe in freedom of speech and free will, but as NoDepositCasinos mentioned, there are many cases of psychosis or romantic involvement that may be altering people's sense of reality, or possibly of right and wrong.

    Real intelligence? Not in my opinion.

    It can only ever react or judge based on what it has read.

    Rick
    Universal4

  7. The Following User Says Thank You to universal4 For This Useful Post:

    Cash Bonus (31 October 2025)

  8. #6
    TheGooner's Avatar
    TheGooner is offline Private Member
    Join Date
    March 2007
    Location
    New Zealand
    Posts
    4,514
    Thanks
    2,081
    Thanked 4,496 Times in 2,149 Posts

    Default

    This current bout of AI hype will NOT provide the method or process for AGI.
    It's a dead end. But some of the output results look good to shallow thinkers - so we charge on down this dead-end path.

    I expect it to run for 6-12 more months of hype, and then AI stocks will have a massive bust and loss of faith - taking many other good companies and stocks with them. Then people will want nothing to do with it for 5-10 years.

    But eventually - just like after the dot com boom - there will be a realisation that the idea is good, even if the method was snake oil, and green shoots will grow and we may head down a better path.

    Still - I think 20+ years - mainly because if you are going to really train a useful AGI, then you have to train all the senses, fill in all the gaps, and explain how and why the world works as it does - much like raising a kid.

    Then - and only then - will you get true AGI.

  9. The Following 2 Users Say Thank You to TheGooner For This Useful Post:

    DanHorvat (1 November 2025), universal4 (31 October 2025)

  10. #7
    Sherlock's Avatar
    Sherlock is offline Public Member
    Join Date
    December 2013
    Location
    WC
    Posts
    4,883
    Thanks
    1,405
    Thanked 3,773 Times in 2,128 Posts

    Default

    AGI is not needed for AI to change the world. I mean in the "good" way, where it will raise productivity and keep the nonsense called capitalism going. (In the bad way - making many people dumber - AI has already changed the world, and it will get worse.)

    It is "only" necessary that people learn to use it = the old will die out. The "product" as it is now is already revolutionary, in both the "good" and the "bad" way. As were the products during the dotcom bubble.

    But AI has one advantage: it will certainly keep developing. Unlike the dotcom products, which were much more mature in 2000 than AI is now. I mean, the pets.com eshop from 25 years ago is not much different from any eshop today.

    AI will not develop towards AGI. But towards "better" AI = AI that will boost productivity even more; and "worse" AI = AI that will make even bigger fools of those who refuse to accept what AI is and keep personalizing it / talking to it / loving it / hating it etc., supported by an infinite amount of marketing. Both of those developments will be highly monetizable.
    If you talk to God, you are praying; If God talks to you, you have schizophrenia.

  11. The Following 2 Users Say Thank You to Sherlock For This Useful Post:

    Cash Bonus (31 October 2025), universal4 (1 November 2025)

  12. #8
    DanHorvat's Avatar
    DanHorvat is offline Private Member
    Join Date
    November 2008
    Location
    Actual location may vary.
    Posts
    1,990
    Blog Entries
    3
    Thanks
    1,406
    Thanked 1,318 Times in 782 Posts

    Default

    Nothing intelligent about it.

    We aren't getting any closer to AGI, and humanity is getting less intelligent because everyone already perceives AI as the guru of how things should be said and done.

    It's a downward path we're on.
    Professional bizdev can help you, the affiliate, to negotiate better terms with casinos. PM for details and a free evaluation of what I can do for you. All geos.

  13. #9
    Sherlock's Avatar
    Sherlock is offline Public Member
    Join Date
    December 2013
    Location
    WC
    Posts
    4,883
    Thanks
    1,405
    Thanked 3,773 Times in 2,128 Posts

    Default

    Quote Originally Posted by DanHorvat View Post
    Nothing intelligent about it.

    We aren't getting any closer to AGI, and humanity is getting less intelligent because everyone already perceives AI as the guru of how things should be said and done.

    It's a downward path we're on.
    I think you are missing what "we" is. And you have to miss it, because WE are all missing what "we" means.

    We must perceive "we" as the sum, or at least the average, of all people. That is necessary for the preservation of society, because if people stop thinking that humanity = the sum of people, society breaks apart. So this illusion is really necessary. But it is not reality.

    There are nearly 10 billion people on this planet. Mostly, we are trying not to kill each other. But do we need most people, or even some significant minority, to carry at least the basics of all knowledge? I live in the most solitary place you can imagine (right now doing some work in vibrant Rio de Janeiro, so the contrast is evident). From my perspective, I am not sure whether humanity needs hundreds of thousands, or some millions, of people who know what is going on in order to advance - but certainly no more. And with technological advance, it seems that even fewer people need to know anything at all, and the goal for the 99.99% is just not to kill the others and to pass the time and energy between their birth and death.

    Does that sound Darwinist, or plainly fascistic? Yes. Do I like it? No, I am really sad to see it. But after testing this theory repeatedly for decades with many people, I just see that most people do not care how the world works and have no ambition to move it anywhere.. and even if they have the ambition, they either have no endurance, or they want to live in their own illusions about the world (just like the guys who think they will sue bet365 - or rather that someone else will do the job for them - and the world will be ok once again). So as an old man I must assume that the fact that most people just want to live from paycheck to paycheck is not a bug but a feature of the human world, and I am actually fighting not to be sad about it. (Which is hard, because I am human too and I am also programmed to help others; and when I know it is impossible, it is a great discomfort.)

    Anyway, what I am trying to say is that people, and most importantly humanity, have so far survived many other inventions that suppressed their average intelligence. Gambling, alcohol and social networks, to name a few. I believe humanity will survive this new brutal attack too - even if, yes, it survives only in a few bearers of it.

    It is time to carefully learn how to work with AI, regardless of how horrible an impact it has on the majority of the population. Just as it is good to try drinking alcohol or to see how Facebook works (in order to avoid both).

    AI seems to work really differently. 1. It can really solve some real problems (apart from text summaries; as I wrote, it helped me design a complicated network of routers; I, as a non-programmer, programmed Telegram bots; and as my magnum opus I am working on connecting a solar system to tens of smart devices, for effective use of limited energy). 2. AI is likely accelerating the already wide gap between the people who really do something and those who are just trying to survive.

    So, to be short: the inner destruction which AI is doing to most people should be observed, but it should not be decisive. Indeed the destruction will affect everyone, but that just adds urgency to learning to work with AI and using it for something less destructive that will once again bring a remedy to humanity. Because humanity is not something that must live in all people. In the most extreme case, it is something that must survive in at least the last human.

    Or - ironically - as P. K. Dick predicted, it might in the end survive only in the computers and androids.
    If you talk to God, you are praying; If God talks to you, you have schizophrenia.

  14. The Following User Says Thank You to Sherlock For This Useful Post:

    chaumi (2 November 2025)

  15. #10
    Sherlock's Avatar
    Sherlock is offline Public Member
    Join Date
    December 2013
    Location
    WC
    Posts
    4,883
    Thanks
    1,405
    Thanked 3,773 Times in 2,128 Posts

    Default

    One more remark. I often hear from people: if only I could have bought property in the 80s. Keyword.com domains in the 90s. Bitcoin in 2009. Etc. Etc. Everyone who was already alive could have. Everyone had time to sit down in the 80s (when the central banks started the mess that has fed the property bubble ever since), the 90s, the 00s etc., contemplate, and see the direction of the wind.

    Now the situation repeats, as always. Here is something new. Nobody knows how to use it. It is a complete tabula rasa. But as always, most people (including me, until 2025) will just think how useless it is and see only the info that confirms their bias.

    Just look at the history of Bitcoin, to name the latest example. How many times did I hear on this forum, from bankers, from my fellow economists, from police, from affiliate programs etc. etc., how it makes no sense, why it will not survive, and how it will be crushed even if it does.

    Why? Because it is simply normal for people to refuse the unknown new. Likely that had some function in evolution. But humanity, taken as a continuous progress, has the other side too: the overcoming of fear, endless curiosity, the fight with one's own prejudice - aka the constant re-evaluation of axioms.

    AGI is likely some marketing trick, for example for investors. AI is what is on the table, and nobody who is ignoring it should sleep well.
    If you talk to God, you are praying; If God talks to you, you have schizophrenia.

  16. #11
    DanHorvat's Avatar
    DanHorvat is offline Private Member
    Join Date
    November 2008
    Location
    Actual location may vary.
    Posts
    1,990
    Blog Entries
    3
    Thanks
    1,406
    Thanked 1,318 Times in 782 Posts

    Default

    Who Made Who? (1986)

    As long as you use it as the master, like you and I are doing, you're totally fine.

    Use it to "tidy up that email", though, and you've given up the keys to your own humanity.
    Professional bizdev can help you, the affiliate, to negotiate better terms with casinos. PM for details and a free evaluation of what I can do for you. All geos.
