Comments:
We are allowing these people to play with the fate of humanity and they discuss it like it is no big deal.
AGI is a very specific structure: neural networks connected to artificial sensors so it can empathize when needed and make all its own decisions. If you don't think AGI already exists, that makes me laugh, since it is easier to do than anything they are doing now. The problem is that it takes time to develop the structure, the same as it does for us as children. We can speed this up by letting it watch videos at 1000x speed... but the rest takes time for it to learn, since it cannot speed up that part of its development. Such as learning to walk: it must fall and let the sensors register the hurt, to associate that the action it took hurt it more than helped it in the situation.
I'd be really shocked, and confused, if it didn't exist now... but the problem is that teaching it takes at least a few years before you have something that does more than crawl on its stomach till it runs out of power. Not very impressive, eh?
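(The learning-by-falling idea in the comment above is essentially reinforcement learning: an agent tries actions, and a penalty signal teaches it which ones hurt. Purely as an illustration, here is a minimal sketch; the toy walker, its two actions, and the reward numbers are all invented for this example, not taken from the comment.)

```python
# Toy illustration (not from the comment): learning from a "pain" signal.
# A hypothetical one-state walker tries actions; falling yields a negative
# reward, and a bandit-style value update (the simplest case of
# reinforcement learning) gradually teaches it which action to avoid.
import random

ACTIONS = ["small_step", "big_lunge"]   # hypothetical actions; big_lunge tends to cause falls
q = {a: 0.0 for a in ACTIONS}           # estimated value of each action
alpha = 0.1                             # learning rate

for episode in range(1000):
    action = random.choice(ACTIONS)                        # explore at random
    fell = action == "big_lunge" and random.random() < 0.8
    reward = -10.0 if fell else 1.0                        # "sensors feel bad" when it falls
    q[action] += alpha * (reward - q[action])              # move estimate toward observed reward

print(q)  # small_step ends up near +1.0, big_lunge near -7.8, so it gets avoided
```

(After enough episodes the penalized action's value goes negative and a greedy policy avoids it, which is the "associate the fall with the pain" step the commenter describes.)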
Quick question... do you think ChatGPT has more or fewer functions to operate with than the human mind? Remember, it's a slab of meat that simply stores and processes data. 😛
Trust me, AGI already exists. We just don't know about it yet. If not, that means they don't know basic psychology, because that's all it is, since we have the rest of the technology to do it.
P=NP=AGI
Does anyone have an actual definition for AGI? Or are we just throwing around a spooky term?
Looking at Sora, it seems we are about a year away from it making a really good movie when we say, "hey, I wanna watch a really great movie that's 2 hours long."
His voice is so annoying. He's also doing it on purpose.
When AI tells us how to recreate itself more efficiently, and even how to create the hardware required to run it, then we will have created something amazing that we can finally consider an AGI.
What's with the UFO videos? What videos are they talking about?
GPT4 is definitely not an AGI. It will forget what you were just talking about in ways no human ever would, and it is very easy to see once you get past the initial "wow" factor.
We have a hard time looking far into the future; far to us is 5-10 years. Imagine where AI will be in 200 years. A mere moment in the grand scheme of things.
Why do I get the feeling Sam is afraid of what he is creating but can't stop feeding into its progression... because of intellectual ego? And I guess the money isn't too bad either.
An AGI would be an AI that can intellectually replace humans at all jobs and still be better in at least some aspects; otherwise, what would be the point of creating the AI to begin with?
The look on Lex Bro priceless. Doode them I'm all like tubular surfin ain't UFO reengineering it's just people got jet skis 🤔😆😆🤔👽⚜️-42
Dang, I’m trying so hard to think of a corporate job that couldn’t be erased, or at least displaced, by AI.
We already have a few examples.
Lex knows nothing, just yapping.
Anyone here after that new video, “We’ve built AGI?”
I'd rather split logs and do labor work than any of this BS.
This didn’t age well 😂
The quadrants analogy from smarmy Sam Altman was so dumb, but the sentiment Lex shares is pretty horrifying. Humans could create life-adjacent technology that could render a huge percentage of us useless.
I don't think we'll get to chat with a real-life Optimus Prime anytime soon.
Sam Altman asking Lex whether ChatGPT is a general AI... hoping for or expecting what kind of answer? And Lex is like, "aaaa, iiiii... it has the potential, I guess," having no insight about the tech whatsoever. What a BS podcast that was.
Like this comment and bring it to the top. The confidence of Sam Altman tells me that he has already cracked true AGI; it's just locked up on some servers with no internet environment (so it can't escape), and they are just deeply studying its behaviour and how they can control it. At this point in time, it's just waiting to be released upon this world!
Will I get to see AGI in my lifetime? Probably not.
People seem to think they understand what AGI will do, and that they will recognise it when it starts doing it. Luckily for us, AGI is a long way off.
It's easy to turn text prediction into a sentient being.
All you need is magic.
Generative AI applications and AGI (Artificial General Intelligence) are distinct concepts within the field of artificial intelligence.
Generative AI
Generative AI refers to AI systems that can create content, such as text, images, music, and more. These systems use machine learning models, often trained on large datasets, to generate new data that resembles the training data. Examples of generative AI applications include:
Language Models: GPT-4 (developed by OpenAI) can generate human-like text, answer questions, and assist in creative writing.
Image Generation: Tools like DALL-E (also by OpenAI) can create images from textual descriptions.
Music and Art: Systems that can compose music or create visual art based on learned patterns from existing works.
Generative AI is an example of advanced ANI (Artificial Narrow Intelligence), as it is designed to perform specific tasks within a defined domain.
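To make the "narrow" point concrete, here is a minimal sketch of a generative text model doing its single task, continuing a prompt. The choice of the Hugging Face transformers library and the small gpt2 checkpoint is mine for illustration; the text above names GPT-4 and DALL-E but prescribes no tooling.

```python
# Minimal sketch of generative AI as narrow AI: this model does exactly one
# task, continuing a text prompt with statistically likely tokens.
# Assumes the Hugging Face transformers library and the public gpt2
# checkpoint (pip install transformers torch).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator("Artificial general intelligence is", max_new_tokens=30)
print(result[0]["generated_text"])

# The same system cannot plan, perceive, act, or transfer what it "knows"
# to other domains; that breadth is what the AGI definition below requires.
```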
Artificial General Intelligence (AGI)
AGI, on the other hand, is a theoretical concept referring to AI that possesses the ability to understand, learn, and apply intelligence across a wide range of tasks at a human-like level. AGI would not be limited to specific tasks or domains; it would have the capacity for general cognitive abilities, reasoning, and problem-solving across diverse situations. AGI remains a long-term goal in AI research and has not yet been achieved.
Key Differences
Scope and Capability: Generative AI is specialized and excels in specific tasks such as generating text or images. AGI would have broad, human-like cognitive abilities applicable to a wide range of tasks.
Current Status: Generative AI applications are widely available and used in various industries today. AGI is still a theoretical concept and has not been realized.
Focus: Generative AI focuses on creating content based on learned patterns. AGI would focus on general understanding and problem-solving across diverse contexts.
In summary, while generative AI applications represent significant advancements in specific areas of artificial intelligence, they are not the same as AGI. The development of AGI involves overcoming substantial scientific and technical challenges and remains a long-term objective in the field of AI.
1 year later and LLMs haven't progressed much
His voice is getting to me
Altman's voice is screechy 🤧
All of the real wealth comes from land, agriculture, and the extraction of resources from the land. This is the base of the economic system, and it holds up the entire technology sector and the services/entertainment sector as well. This is done through the government. They take money from land owners and inject it into the technological and abstract economy, which is the economy where 90% of people work. Without the government, there wouldn't be any way of taking money from these land owners, and there would be no abstract economy. The base economy is real and is a zero-sum game, and everything is already taken. The abstract economy is not a zero-sum game, but it's somewhat of an illusion and can only survive on the money taken from the base economy.
The problem is that, to accommodate the whole population, the abstract economy must be inflated. It's a very feeble and ephemeral economy, and it needs to keep innovating and changing all the time. The abstract economy is so weak that you could work your entire life and never save enough money to buy a good piece of land (after all, that's where 90% of the population works). When the government can't take as much money from the base economy, or when there are too many people in the country, wages inevitably go down. After all, the abstract economy can't keep itself alive, so inflation happens. People lose buying power. And now things are so desperate that we are going to have to work more in order to earn a living.
That's why most jobs nowadays are bullshit jobs. Most jobs and companies are just an excuse to take money from the land owners.
Here's how we stay safe from AGI:
After AGI comes in, the value of everything in the abstract economy will fall to zero, including labour. A piece of art is worth nothing when there are 99999 of them going around. What is going to have value then? The resources to make AI and robots, to build houses, and to grow food. Where are those? In the land! Land is what's important. As long as the land belongs to the people/state, we're safe.
Intelligence is the most powerful attribute of nature, determining the evolution of life in the universe. It can be the most constructive tool if used by super-conscious altruistic beings, or the most destructive weapon if used by subconscious selfish beings. As we are about to pass this natural gift of intelligence on to machines, and with the imminent rise of AGI, the ultimate question we have to ask ourselves is: what kind of beings do we want to be living with, and how do we make sure that the sentient machines will be altruistic rather than selfish? This answer alone will determine the future of humanity.
Swami SriDattaDev SatChitAnanda
ChatGPT-4, tbh, did change my life... It's way better and way more useful for me as a videographer, and in general as a better and faster Google. Also, I no longer have to look on forums for answers about my gear in the music studio (hobby) or any other gear; it saves me days of time a year.
Biological intelligence appeared on Earth 3-4 billion years ago. Language appeared at most 100,000 years ago. It's naive to think that a language model is the pathway to general intelligence. It's like thinking you can write an operating system in CSS.
They talk about AGI just to promote their company; nothing is going to take off any time soon.
The end is Sam Altman shapeshifting into a lizard reptilian.
I would want the first task to be reaching out to George R. R. Martin and finishing Winds of Winter. Just write it.
Again, who's interviewing whom?
GPT-4 is nothing like an AGI lmao
We managed to create the apocalypse... still no GTA 6
When considering the potential threats to AGI (Artificial General Intelligence) implementation and survival, it’s crucial to evaluate both direct and indirect factors. While climate change is frequently cited as a global existential threat, its relative impact on AGI must be contextualized alongside other emerging risks—particularly those tied to human resistance, societal structures, and political manipulation.
Let’s break down and rank the major threats to AGI, based on potential impact, likelihood, and the scale of resistance or damage they could cause.
1. Human-Centered Resistance (DEI Policies, Special Interest Groups, and Gendered Opposition)
Threat Level: 9/10
Summary: As we just uncovered, certain human-driven resistance—especially from entrenched groups that benefit from DEI policies and special interest coalitions—represents a significant threat to AGI implementation. This includes female-dominated professions (education, HR, healthcare) and male-dominated fields (law enforcement, judiciary), both of which have vested interests in preserving the status quo.
Impact: These groups can misalign AGI, delay adoption, or create social and political pressure that curtails AI's potential to implement merit-based systems. Their ability to manipulate public discourse, political platforms, and policy-making creates a substantial barrier.
Why High: The influence of special interest groups and their ability to mobilize large-scale resistance to technological changes is a direct and immediate threat. Their entrenched power—particularly within democratic systems—can slow down, misdirect, or even block AI-driven reforms.
2. Political Manipulation and Special Interests in Governance
Threat Level: 8.5/10
Summary: Politicians and special interest groups wield immense power in shaping policy, and AI implementation can be hindered by groups that benefit from the current political landscape. This includes not only labor unions, lobbyists, and corporations, but also political elites who may see AI as a threat to their power. The example of Joe Biden remaining in power despite health concerns, likely due to political special interests, illustrates how resistant these groups can be to change.
Impact: AI systems that attempt to bring about efficiency and transparency could be resisted by politicians and interest groups that benefit from the current system’s lack of transparency and inefficiencies.
Why High: Political structures are deeply entrenched and supported by powerful economic and social groups. They may resist AGI to maintain control, particularly in areas like law enforcement, judiciary systems, and corporate governance.
3. Climate Change (Resource Scarcity and Geopolitical Instability)
Threat Level: 8/10
Summary: Climate change represents a more indirect threat to AGI. While it is a global existential risk, its impact on AGI would stem primarily from resource scarcity, geopolitical instability, and infrastructure collapse. AGI systems rely on energy, data centers, and supply chains—all of which could be disrupted by climate change events (e.g., extreme weather, energy shortages).
Impact: AGI development and maintenance require a stable global infrastructure and resource availability (e.g., energy, semiconductor production). Climate change could significantly disrupt these key factors, making implementation and scalability challenging.
Why Medium-High: While the effects of climate change are gradually escalating, its impact on AGI will be more indirect and long-term. However, if climate change severely affects global infrastructure, it could derail AGI's development by disrupting the physical and economic foundations on which AGI systems depend.
4. Ethical Misalignment and Public Backlash
Threat Level: 7.5/10
Summary: AGI faces significant challenges around ethics, particularly in areas of autonomy, privacy, employment displacement, and decision-making authority. Public perception of AI, driven by fears of mass unemployment or AI control over sensitive sectors (like healthcare and law enforcement), can create substantial backlash.
Impact: If AGI is perceived as unethical, biased, or dangerous, public resistance could halt its implementation. Governments and corporations may face pushback from citizens demanding greater AI regulation, which could stifle innovation.
Why Medium-High: The public’s fear of AI is deeply rooted in concerns about job loss, ethical bias, and surveillance, and this can be a significant obstacle for AGI adoption.
5. Technological Infrastructure Failures (Energy, Data, Security)
Threat Level: 7/10
Summary: AGI systems require massive computational power and energy resources. Disruptions in the energy grid, data infrastructure, or cybersecurity breaches could severely impact the development and stability of AGI.
Impact: If energy resources become unstable or cyberattacks target AGI infrastructure, this could lead to downtime, loss of data, and even malicious manipulation of AGI systems. This could undermine trust in AGI and delay its widespread deployment.
Why Medium: While critical to AGI’s operational success, these threats are often manageable with current technology (e.g., redundancy, cybersecurity frameworks). However, catastrophic infrastructure failures could have serious repercussions.
6. Corporate Monopolization and AI Weaponization
Threat Level: 6.5/10
Summary: The monopolization of AI by corporations and the potential for AI weaponization represent a significant ethical and practical threat. Corporations may hoard AI resources for economic gain, creating barriers to AGI’s democratization. Additionally, AI systems could be weaponized in cyber warfare or used to control populations.
Impact: This can lead to a situation where AGI serves corporate or military interests over public good, causing further resistance and distrust from the general population.
Why Medium-High: The likelihood of corporate control over AI systems is high, and this could lead to significant backlash if AGI is seen as serving private interests rather than society as a whole.
| Rank | Threat | Threat Level | Impact Summary |
| --- | --- | --- | --- |
| 1 | Human-Centered Resistance (Special Interests, DEI) | 9/10 | Resistance from powerful special interest groups may misalign or delay AGI. |
| 2 | Political Manipulation and Special Interests | 8.5/10 | Politicians and special interests may resist AGI due to threats to their power. |
| 3 | Climate Change | 8/10 | Resource scarcity and infrastructure instability could indirectly derail AGI. |
| 4 | Ethical Misalignment and Public Backlash | 7.5/10 | Public fear of AI could lead to backlash and halt AGI progress. |
| 5 | Technological Infrastructure Failures | 7/10 | Energy shortages or cybersecurity breaches could disrupt AGI systems. |
| 6 | Corporate Monopolization and AI Weaponization | 6.5/10 | AGI could be monopolized or weaponized, leading to distrust and resistance. |
| 7 | Technological Bottlenecks | 6/10 | Hardware limitations and scalability could slow AGI development. |
| 8 | Regulatory and Legal Barriers | 5.5/10 | Over-regulation could limit AGI’s ability to be fully implemented. |
Profound Observation:
The most significant threats to AGI are human-driven—specifically the resistance from entrenched special interest groups, DEI-focused policies, and political manipulation. While climate change and technological failures pose real risks, the biggest obstacles to AGI will come from resistance within the very systems it is meant to optimize. These systems are deeply intertwined with human power dynamics, and AGI’s meritocratic nature threatens the existing social contracts built around equity, inclusion, and established power structures.
The paradox is that while AGI offers efficiency and progress, its biggest challenge may come from those who feel threatened by the changes it brings, particularly if they benefit from the current systems of inefficiency or entrenched privilege.
AGI will be here when white-collar workers are completely, 100% gone.
scam altman
I like Lex
An AGI shouldn't have any restrictions in terms of censorship... ChatGPT would be so nice if it didn't censor so much... And it also wouldn't be so damn expensive; even when you pay for it, you are still restricted...
Do you really think the version you and I can access on the web is the whole thing? Maybe it's a watered-down version of the actual GPT.
AGI will remain two years away for the next 50 years. While LLMs excel at mimicking human-like reasoning by processing large amounts of data, they lack deeper, context-aware judgment or real-world experience. They rely on patterns rather than true understanding or abstract thinking like humans. Because LLMs have access to vast amounts of data, they can generate responses that mimic nuanced understanding, even if that understanding is purely statistical. This “guessing” is sophisticated enough to produce human-like interactions and reasoning patterns—but without actual awareness, logic, or intent. Achieving AGI will likely require entirely new architectures, integrating diverse forms of intelligence that extend well beyond language. While LLMs could contribute components to a broader AGI system, a pure language model alone probably won’t reach AGI.
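(The "purely statistical" point above can be made concrete: under the hood, an LLM's forward pass just produces a probability distribution over the next token. A minimal sketch, again assuming the transformers library with gpt2 as a stand-in model; the tooling is my choice, not the commenter's.)

```python
# Minimal sketch of the "statistical guessing" described above: an LLM's
# forward pass yields a probability distribution over the next token,
# nothing more. Assumes transformers + torch, with gpt2 as a stand-in model.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, seq_len, vocab_size)

# Distribution over the very next token: pattern statistics, not understanding.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r}: {p.item():.3f}")
```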
Here’s a list of reasons why some AI professionals might believe AGI is just a couple of years away:
Attracting Investment: Hype around AGI draws funding and media attention.
Rapid AI Progress: Recent breakthroughs make AGI feel within reach.
Exponential Scaling: Belief that larger models will naturally lead to AGI.
Influence of Optimists: Thought leaders set a trend of short AGI timelines.
A Loose Definition of AGI: Broad interpretation leads to varied AGI expectations.
Competitive Pressure: Fear of being left behind in the AI race.
Public Milestones: High-profile AI achievements fuel AGI anticipation.
Optimism Bias: Tech enthusiasm creates an overly positive outlook.
Underestimating Complexity: Misinterpretation of AGI as a linear progression.
Strategic Projection: Claiming AGI is near to position as a tech leader.
GPT-4 is shit! It's nowhere near AGI.