never said that.
I don’t understand it either, but I’m hoping he’s about to bring undeniable proof that the “large action model” is just some scripts customised for each service (e.g. DoorDash)
This is the demo, where the initial load time (a browser opening on a virtual server) takes longer than it would to just pull out your phone and order
https://x.com/edzitron/status/1785411910336487437?s=46
I think they were asking what is the meme supposed to mean.
I have no idea either, heard the line a few times though
This demo is so hilarious, millions of dollars later:
“I’m hungry, get me some McDonald’s”
fails
Does anyone else think the OpenAI safeguarding quits are because the company realised the positions are unnecessary?
OpenAI realised they are building enterprise products and tailored to that, realised this AGI hype has no basis, and so rightfully starved of compute the team whose only possible useful purpose is preventing misuse of ChatGPT
Gemini seems better than Perplexity with that one:
Gemini:
Perplexity:
Multi-million dollar pitch: Aggregator for AI models, called ‘AIgregator™’
The thing I don’t get is, what are they making safe?
OpenAI won’t be building an AGI anytime soon, it’s complete Rabbit-style hype that will not materialise in the next decade
Am I wrong in saying that an AGI would teach itself, not be fed petabytes of training data? At the moment, OpenAI’s models are built on just predicting what comes next, based on data scraped from the internet, which they’re quickly running out of
The only possible reason you’d need a safety team is to stop it producing content it shouldn’t when asked to… I think that sometimes, people quitting is a decision made to cause drama, to make people think certain things are closer than they thought - but we have nothing that indicates this
Rabbit founder compares the R1 to a flashlight and says you are submitting your source code to the app stores when you publish your app (completely false: you submit a compiled, obfuscated version)
In this tweet a researcher accesses the servers that the “LAM” runs on, and finds that it’s just a pre-coded browser automation script
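For anyone wondering what “pre-coded browser automation per service” would even look like, here’s a minimal sketch. Everything in it is invented for illustration (service names, selectors, step format): the point is that each service gets a hand-written list of steps a browser driver replays, with no model deciding anything, and anything unscripted simply fails.

```python
# Hypothetical per-service scripts: fixed steps with a {query} placeholder.
SERVICE_SCRIPTS = {
    "doordash": [
        ("goto", "https://www.doordash.com"),
        ("click", "#address-input"),
        ("type", "#search", "{query}"),
        ("click", "button.add-to-cart"),
        ("click", "button.checkout"),
    ],
    "spotify": [
        ("goto", "https://open.spotify.com"),
        ("type", "input[role=searchbox]", "{query}"),
        ("click", ".first-result"),
    ],
}

def plan_actions(service: str, query: str) -> list[tuple]:
    """Look up the pre-written script; there is no 'model' involved."""
    steps = SERVICE_SCRIPTS.get(service)
    if steps is None:
        # Any service without a hand-written script is simply unsupported.
        raise ValueError(f"Unsupported service: {service}")
    # Fill the user's request into the fixed template.
    return [tuple(p.replace("{query}", query) for p in step) for step in steps]

actions = plan_actions("doordash", "Big Mac meal")
print(actions[2])  # ('type', '#search', 'Big Mac meal')
```

A real replay layer would feed these tuples to something like Playwright or Selenium, which is also why captchas or a redesigned page break it instantly.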
Indeed. Warren Buffett has noted we’re in a new arms race with AI, much as with nuclear bombs when they were the new kid on the block. And I, personally, don’t think it’s hyperbole to suggest it either.
We as a species are not really prepared for it. Reading “The Coming Wave” by one of the founders of DeepMind, who now works with Microsoft, suggests as much, plus what needs to be done to ensure proper governance and controls.
Which is to say, cute as the little digital rabbit thing is, we are all at risk from AI right now. I’ll still use it, I’m not that tin-hat, but it does concern me what’ll happen when a state starts to really use AI in sideline warfare.
Of course I don’t really know enough to say for sure, but from my basic understanding an AGI would be a completely different architecture to what we have currently; it should be able to match or exceed human levels of learning, and of course intelligence
Though it seems what we have now is actually just based on an architecture introduced in 2017: Transformer (deep learning architecture) - Wikipedia
Let me give an example: a 2-3 year old toddler can quickly learn what a car is, but an AI model needs to be trained on thousands and thousands of images and will still mess up. An example of this was Rabbit’s vision model (which is just OpenAI’s product) calling a plane in the sky a whale.
A relatively young kid is capable of thinking “that can’t be a whale in the sky, those go in the sea”, and vice versa with a plane. But the AI thinks that plane looks more like a whale, because it matches up better with more of its training data. The current architecture cannot properly take into account the context of the image. It just doesn’t seem to work in the same way as human intelligence.
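The “matches up better with training data” idea can be shown with a deliberately dumb toy: a nearest-neighbour classifier over invented 2-D feature vectors (say, “elongated shape” and “blue background”). All numbers and labels below are made up for illustration; the point is that a plane against a bright blue sky can land nearer the whale examples, and nothing in the matching step encodes “whales don’t fly”.

```python
import math

# Invented training examples: (elongation score, blue-background score).
TRAINING = {
    "whale": [(0.9, 0.8), (0.85, 0.9)],   # elongated, blue ocean background
    "plane": [(0.8, 0.3), (0.9, 0.2)],    # elongated, mostly grey tarmac shots
}

def cosine(a, b):
    """Cosine similarity between two 2-D vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def classify(image_vec):
    """Pick the label of the single most similar training example."""
    best = max(
        ((label, cosine(image_vec, ex))
         for label, examples in TRAINING.items() for ex in examples),
        key=lambda pair: pair[1],
    )
    return best[0]

# A plane photographed against a bright blue sky: elongated AND blue.
print(classify((0.9, 0.85)))  # "whale" -- nearest examples are the whale ones
```

Real vision models are far more sophisticated than nearest-neighbour matching, but the failure mode (surface similarity to training data beating world knowledge) is the same shape.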
My two cents anyway
Aid workers get blown up according to some sources.
It has already started:
If the OpenAI safety members were quitting because they thought the world was actually going to end, some money wouldn’t be important to them… how does that make sense?
They are upset because their unnecessary needs and paranoid fantasies are not being pandered to
An interesting example. But a 2 year old child has in fact already generally been exposed to thousands of examples of cars. Babies start to make sense of the world from birth, it takes time to process different shapes and colours into objects but by two they are starting to get there.
That’s only to ruminate on the way children process by the way, not to say that AI is anywhere near the processing or data storage of a human brain. You are right it’s different and AI is a long way off now.
What I think will also be very different if/when an AGI ever exists is that it won’t be subject to the neurochemistry of a human: the chemical imbalances and parts of our brain that come from our evolution and give us things like a sex drive or the fight-or-flight response. Exactly what motivates its thinking and actions will depend on who codes it and what for, but its motivation is unlikely to be anything like as complex and self-contradictory as a human’s.
Likely something we’ll see in our lifetimes anyway, so that’s exciting.
I have to say this is extremely worrying behaviour for the CEO of a company. Actually, call it what it is: it’s disgusting behaviour full of bare-faced lies, if the video is even half true (which I suspect it is).
It’s odd that these people find VC money so easy to come by when so many companies are closing because they can no longer access any.
Yes, I just saw it myself. No surprises, and it repeats much of what was pointed out earlier in this thread, on Twitter and on the Rabbit Discord (followed by many timeouts or bans, of course)
Nothing really surprising at all; that’s the guy who owns it, and his history was pointed out months ago. It just wasn’t in a place where he couldn’t delete, ignore or claim lies, and Coffeezilla has too much credibility to do that with
Will be interesting to see what happens next
It’s not just worrying, you can see EXACTLY what’s coming next
He promised his NFT buyers unlimited lifetime access to an AI that costs money every time you use it (and then he SCAMMED them by every definition of the word)
He promised Rabbit buyers access to an unlimited lifetime AI that costs money every time you use it (and we are here now)
From my limited technical view of Rabbit it would appear the following is how it works:
- The device is an Android device, running an Android app that communicates with Rabbit’s cloud and LAM
- Rabbit claimed the app needed special hardware; this was proven to be a lie when it was shown working on a Pixel
- The Rabbit relies on OpenAI for seemingly everything other than the LAM (maybe something else for TTS?)
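The bullets above describe a classic thin-client pattern, which can be sketched in a few lines. Everything here is hypothetical (endpoint, payload shape, class names); the backend is stubbed so the sketch runs offline, where a real client would POST over HTTPS. The point is that the “device” does nothing model-like, which is exactly why the app runs fine on a Pixel.

```python
def cloud_backend(payload: dict) -> dict:
    """Stand-in for the cloud side (which would itself call out to
    OpenAI for language and some provider for TTS)."""
    return {"reply": f"echo: {payload['utterance']}", "tts_url": None}

class ThinClient:
    """All the 'device' has to do -- which any Android phone can also do."""

    def __init__(self, send=cloud_backend):
        self.send = send  # injected so this sketch runs without a network

    def handle_press_to_talk(self, utterance: str) -> str:
        # Forward the request, display the reply. No local model, no
        # special hardware requirement.
        response = self.send({"device_id": "r1-demo", "utterance": utterance})
        return response["reply"]

client = ThinClient()
print(client.handle_press_to_talk("order me a McDonald's"))
```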
But when the LAM has been shown to be pre written code, that still fails in the demos, what’s new here?
How is this worth more tens of millions?
They made a bare-bones app, told a bunch of lies (the LAM cannot function in the modern web of captchas even if it were AI-powered), spent some money at Teenage Engineering and called it revolutionary?
And this is before we get to the obvious scam to come pointed out above?
What’s the gist of this without having to watch the video? I can’t right now.