In statistics, sample efficiency means you can precisely estimate a specified parameter, like the mean, with few samples. In AI, it seems to mean that the AI can learn how to do unspecified, very general stuff without much data. As if the underlying truth about the world, and how to reach one's goals within it, were just some giant parameter vector that we need to infer more or less efficiently from "sampled" sensory data.
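A minimal sketch of the statistical sense, with my own made-up numbers rather than anything from the paper: the precision of the sample mean improves roughly as 1/sqrt(n), so "sample efficiency" there is just a question of how few draws you need for a given error bar.

```python
# Toy illustration of the statistical sense of "sample efficiency":
# how tightly the sample mean pins down a known parameter as n grows.
import numpy as np

rng = np.random.default_rng(0)
true_mean, sigma = 5.0, 2.0  # made-up population parameters

for n in (10, 100, 10_000):
    samples = rng.normal(true_mean, sigma, size=n)
    estimate = samples.mean()
    std_error = sigma / np.sqrt(n)  # precision improves as 1/sqrt(n)
    print(f"n={n:>6}: mean estimate {estimate:.3f} +/- {std_error:.3f}")
```

The AI usage has no comparably crisp quantity behind it, which is the contrast being drawn here.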
I'm not sure if there's anything interesting here, but I did notice the author was interviewed on the podcast Machine Learning Street Talk about this paper,
https://www.youtube.com/watch?v=K18Gmp2oXIM&t=3s
Picture a machine endowed with human intellect. In its most simplistic form, that is Artificial General Intelligence (AGI)
Artificial human intelligence. Not what I'd call general, but I guess so long as we make it clear that by "general" we don't actually mean general, fine. I'd really expect actual general intelligence to do a lot better than human, in ways we can't understand any more than ants can comprehend us.
Humans are the best/only example of General Intelligence we have.
> simp-maxxing
Might want to write this out in full, lol. I thought this in particular was going to be a much more entertaining point.
To be fair, it is spelled with a single 'x' in the paper.
My answer: while 99% of the AI community was busy working on Weak AI, that is, developing systems that could perform tasks humans can do, notionally because of our Big Brains, a tiny fraction of people promoted Hard AI, that is, AI as a philosophical recreation of Lt. Commander Data.
Hard AI has long had a well-deserved jet black reputation as a flaky field filled with armchair philosophers, hucksters, impresarios, and Loebner followers who don't understand the Turing Test. It eventually got so bad that the entire field decided to rebrand itself as "Artificial General Intelligence". But it's the same duck.
The only difference is the same hucksters are trying to sell the notion that LLMs are or will become AGI through some sort of magic trick or with just one more input.
“Strong AI” is the traditional term to compare with “Weak AI.”
In my view, it fulfills the following criteria:
1) Few-shot to zero-shot training for achieving a useful ability on a given new problem.
2) Self-determining optimal paths to fine-tuning at inference time based on minimal instructions or examples.
3) Having the capacity to self-correct, maybe by building or confirming heuristics.
All of these describe, for example, an intern who is given a new, unseen task and can figure out the rest without handholding.
A term in search of a definition, clearly.
Please fix the title on HN to match the actual paper's superior title: "What the F*ck Is Artificial General Intelligence?"
We don't have an issue with profanity on HN but we do take out clickbait.
Replace it with “what the cuss”?
The word 'fuck' isn't the issue. The issue is that "What the fuck is AGI", as a title, doesn't add anything besides sensationalism to "What is AGI".
I don’t know. They typically read entirely differently to me, in the sense that what I would expect to see after clicking the link is different.
I admit, though, that in this case “What is AGI?” better matches expectation to reality. Before I noticed the domain, “What the f*ck is AGI?” would have led me to expect more of a technical blog post with a playful presentation than the review article it actually is.
It communicates that the paper will probably be a lot less "stuffy" than the typical fancy science PDF
> "It communicates that the paper will probably be a lot less "stuffy" than the typical fancy science PDF"
You pose an excellent point... I tend to agree.
From what I can see, Artificial General Intelligence is a drug-fueled millenarian cult, and attempts to define it that don't consider this angle will fail.
This feels like we’re approaching consensus. https://news.ycombinator.com/item?id=45418763
It's been a moving goalpost but I think the point where people will be forced to acknowledge it is when fully autonomous agents are outcompeting most humans in most areas.
So long as half of people are employed or in business, these people will insist that it's not AGI yet.
Until AI can fully replace you in your job, it's going to continue to feel like a tool.
Robotics are also a big one.
Given a useful-enough general purpose body (with multiple appendage options), one of the most significant applications of whatever we end up calling AGI should be finally seeing most of our household chores properly roboticized.
When I can actually give plain language descriptions of 'simple' manual tasks around the house to a machine the same way I would to, say, a human 4th grader, and not have to spend more time helping it get through the task than it would take me to do it myself, that is when I will feel we have turned the corner.
I still am not at all convinced I will see this within the next few decades I probably have left.
Without denigrating the importance of robotics at all (it is important), I don’t see the connection.
The military would pay 1000x what a household would for the same capability, and they are nowhere near the ability to do that. Which should tell you all you need to know.
I wonder if all the grad students that struggle to find jobs now and all the cheap workers in India who were laid off are "feeling the AGI" then.
[flagged]
Please don't fulminate. This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.
Stuart Russell said AGI is coming and that we will get 45 trillion dollars from them.
That's what I'm waiting for.
(He didn't specify when or how the money will get here, but I'm betting that I'll get my fair share.)
I (and I’m being serious) assumed AGI would break into the world’s financial institutions and steal the 45 trillion.
Hyperinflation?
[flagged]
"Please don't sneer, including at the rest of the community." It's reliably a marker of bad comments and worse threads.
https://news.ycombinator.com/newsguidelines.html
p.s. HN is pretty evenly divided on AI, and if one side has the advantage, it's probably the anti.
I'm a big AI/ML enthusiast (published one paper!) and was always flabbergasted to see scientists veer off the typical provable/testable lane and venture into philosophical and emotional territories.
That's funny. I see half of everyone on HN being critical of AI, often unfairly so, but we only ever notice the people we disagree with.
I'm guilty of this as well, otherwise I wouldn't be writing this.
Which is weird, given that AI criticism usually gets downvoted while the frontpage is full of "look what this new model can do" posts every day.
I think most here see AI as a scam and a bubble, but the pro-AI wing has a lot of accounts.
I mean, who has an incentive? Those who want to keep selling.
It isn’t a dichotomy. It is possible for AI to be useful, not a scam, yet also overhyped by people who do not understand it.
It would mean actually reasoning, not just applying stats to look like reasoning.
What do you mean by “just applying stats”?