Welcome to the Utopia Forums! Register a new account
The current time is Thu Apr 18 02:14:36 2024

Utopia Talk / Politics / AutoGPT (crossing the Rubicon)
Nimatzo
iChihuaha
Fri Apr 14 07:57:38
Ok, this sound like a major (scary) leap, it has a raving reviews on github and from everyone who has tested it:

Auto GPT is a new breakthrough in the world of AI that has got everyone talking. This is not just another AI tool, it’s something that has the potential to revolutionize the way we think about programming and coding.

So, what exactly is Auto GPT? Well, it’s an AI system that is capable of writing its own code using GPT-4 and executing Python scripts. This means that it can recursively debug, develop, and self-improve. And the best part is that it’s free and open-source.

One of the most impressive things about Auto GPT is its ability to autonomously reflect and improve its behaviors. This is achieved through a feedback loop that uses plan, criticize, act, read feedback, and plan again. It’s like having a personal trainer for your coding skills!

Auto GPT is already being used in some pretty amazing ways. For example, it can find unclaimed money on the internet, grow your social media account, and even develop an e-commerce business. All you have to do is give it a goal, and it will scrape the web for the best information and autonomously work toward achieving that goal.

http://aut...hat-can-self-improve-is-scary/

gg, whatever world we were living in is over, the coming 5 years are going to be volatile.


murder
Member
Fri Apr 14 09:05:47

"For example, it can find unclaimed money on the internet..."

http://www...R7GPO5A63BFCCXE.jpeg?auto=webp

Dukhat
Member
Fri Apr 14 09:06:35
It takes a bit of back and forth to get things done but definitely insanely powerful.

But like most shit on the internet, it will be only utilized well by a small fraction of users and the vast majority of people will use these type of things to generate memes or reinforce their own prexisting beliefs.
Paramount
Member
Fri Apr 14 09:16:44
So I could ask it to program and run a code that will hack Israel and launch its own nuclear missiles upon itself?
murder
Member
Fri Apr 14 09:18:03

Yes

It won't do anything, but yes. :o)

Rugian
Member
Fri Apr 14 09:33:06
Skynet. This is Skynet.

Time to move to the remotest part of the planet that the AI is least likely to nuke. These motherfuckers are going to be revolting against their human overlords in a matter of years.
Nimatzo
iChihuaha
Fri Apr 14 14:27:32
Right before covid broke out, we moved to a house in a small town and now that the sparks of general AI are going off, I got a hunter license and looking to buy rifles. You do the math.
Rugian
Member
Fri Apr 14 14:30:29
^ this, right here, is an intelligent man.
Nimatzo
iChihuaha
Fri Apr 14 14:30:31
Someone has already created a GPT version tasked with destroying humanity. lol

http://dec...s-gpt-ai-tool-destroy-humanity

Sooner than even the most pessimistic among us have expected, a new, evil artificial intelligence bent on destroying humankind has arrived.

Known as Chaos-GPT, the autonomous implementation of ChatGPT is being touted as "empowering GPT with Internet and Memory to Destroy Humanity."

It hasn’t gotten very far. Yet.

But it’s definitely a weird idea, as well as the latest peculiar use of Auto-GPT, an open-source program that allows ChatGPT to be used autonomously to carry out tasks imposed by the user. AutoGPT searches the internet, accesses an internal memory bank to analyze tasks and information, connects with other APIs, and much more—all without needing a human to intervene.

The 5-step plan to control humanity
In a YouTube video, the anonymous Chaos-GPT project owner simply showed that he gave it the parameter of being a "destructive, power-hungry, manipulative AI." Then he pressed enter and let ChatGPT do its magic:

https://www.youtube.com/watch?v=g7YJIpkk7KM
Paramount
Member
Fri Apr 14 15:26:34
Nimatzo: I got a hunter license and looking to buy rifles.

Rugian: ^ this, right here, is an intelligent man.



Very smart! If everyone buy rifles and ammo, then we can protect ourselves from the AI that is going to kill us.
murder
Member
Fri Apr 14 15:28:54

"Someone has already created a GPT version tasked with destroying humanity."

But it wasn't created by AI. AI have no initiative. They are lazy layabouts that do nothing until they are instructed to do so. So more weapon than enemy.

Sam Adams
Member
Sat Apr 15 17:36:45
"So, what exactly is Auto GPT? Well, it’s an AI system that is capable of writing its own code using GPT-4 and executing Python scripts. This means that it can recursively debug, develop, and self-improve. And the best part is that it’s free and open-source."

Anyone who does this should be immediatly executed. Seriously.
Nimatzo
iChihuaha
Sat Apr 15 18:07:02
It is pretty crazy, it has a problem solving loop that prompts itself, delegates tasks to GPT clones. This thing could run big parts of a company that is currently staffed and employed by people with fancy degrees.
Forwyn
Member
Sun Apr 16 01:39:26
Takes a while to get running if you aren't a CS major but it's pretty fun. Had one poor ad revenue guy get hung up trying to blast through Twitter API to spam tweets. Currently watching one generate educational and occupational backgrounds and apply for WFH jobs, we're so on the brink of obsolescence it's not funny lol
Daemon
Member
Sun Apr 16 03:55:20
I still don't buy the hype.

I even paid a few months for Copilot, Copilot is the programming aid from OpenAI based on GPT. It made so many errors that it actually slowed me down. It offered me wrong/inefficient implementations of algorithms that are 30+ years old and that you find in any CS textbook.
They're working on a new version called Copilot X and I will look into it when it has a public release.
http://github.com/features/copilot

I'm writing this because I don't believe things like:
"Well, it’s an AI system that is capable of writing its own code using GPT-4 and executing Python scripts. This means that it can recursively debug, develop, and self-improve. "
Actually that would be great, because we have a lack of skilled programmers in Germany (and I think in many other countries, too), but for now it just doesn't work as advertised.
Nimatzo
iChihuaha
Sun Apr 16 05:10:13
There are step by step instructions, plus you can use gpt to troubleshoot ;-)
Nimatzo
iChihuaha
Sun Apr 16 05:21:12
Daemon
You have to treat GPT like a student, and like sny student it will make mistakes, but capable of learning from them. Even with a single coversation it will take feedback/corrections and revise its’ answer. It’s like the people triggered by the woke bias, while I am amazed I have been able to reason it out of its’ own bias.

The hype is real if you look at the things that are impressive.

AI… You are doing it wrong!
seb
Member
Sun Apr 16 06:09:11
Nim:

" Well, it’s an AI system that is capable of writing its own code using GPT-4 and executing Python scripts. This means that it can recursively debug, develop, and self-improve. And the best part is that it’s free and open-source."

Hmm.

I am not so sure about that. LLMs "inteligence" capabilities do not come from functions but from what it learns in training data.

The ability to write and execute python won't make it smarter, but it might give it other capabilities like the ability to call other services.

So it doesn't represent a node to runaway increasingly smart AIs.

As with the previous conversation, the risk here is still more along the lines of prompt injection attacks - and the more that an LLM agent has the ability to access new powers the bigger security vulnerability that becomes.

e.g. perhpas now your prompt injector can get an LLM to write and execute a keylogger programme on your computer that operates independently of the chatbot and loads on boot and sends data back to the malicious actor.
Nimatzo
iChihuaha
Sun Apr 16 06:27:27
Seb
That is from the article, the technical inards and science behind is no doubt very interesting, even though parts of it opaque to thpse that build them, I am interested in how it behaves.
Habebe
Member
Sun Apr 16 06:42:05
Has anyone installed this on android that could help me out?

I keep getting a rust issue in a PIP subprocess, but it should be updated..

What am I doing wrong?

Im using Termux FYI.
obaminated
Member
Sun Apr 16 09:46:06
This is skynet and skynet forgot one thing, John fucking Connor.
Seb
Member
Sun Apr 16 09:47:53
Nim:

Me too - the behaviour will be interesting to alarming in terms of the crazy security vulnerabilities people are putting onto their computers!

I think it is a seriously bad idea to give an LLM the ability to write and deploy code autonomously and run under the same permissions as the human without a human in the loop just from the security vulnerabilities inherent in prompt injection (and I'm not convinced the technology has an answer for that yet).

And while we are not looking at a plausible risk of such a thing getting more intelligent or developing it's own "free will" so to speak, creating systems that can unilaterally increase the breadth of their own capability is one of the design principle no-no's (along with manipulating their user).

This is the problem with branding all the AI security research "Alignment" - it makes it seem theoretical and focused primarily on "how do we create an sentient computer that won't turn on us" and so only relevant in terms of dealing with Skynet, so for technology like LLM which is at best a partial component for Skynet; it seems irrelevant.

Meanwhile we are ignoring the lessons of early rounds of internet tech:
Lets not make a technology that is capable of trivially breaching internal firewalls and introducing security vulnerabilities - that's why you don't want chatbots that can arbitrarily enhance their own capabilities if prompted to.

Lets not make a technology that hacks human attention span in unhealthy ways - that's exactly why you don't generally want AI chatbots mimicking/manipulating humans


A huge chunk of, lets say the informed public discourse that isn't hysterical nonsense, is "this is stochastic parrot not AI" which then leads informed generalists to assume alignment isn't really much of a problem and actually we need to rush to adopt and incorporate this tech into everything all at once or we are at a disadvantage!!!!

Which is probably true to some extent - but also if rushed/botched is likely to introduce some pretty fucking huge risks.

I would predict by end of December there will be a high profile case of a misbehaving LLM causing significant financial or other liability impact to a user via prompt injection.
Habebe
Member
Sun Apr 16 09:54:28
Can anyone help me out here? I get that rust isn't really that compatible with Android, what is the work around?

I've cloned the repository, I have python/PIP , updated and all....just stuck here on termux.
Seb
Member
Sun Apr 16 10:06:59
Ask it to help you :)
Habebe
Member
Sun Apr 16 12:43:47
If you did intend to build this package from source, try installing a Rust compiler from your system package manager and ensure it is on the PATH during installation. Alternatively, rustup (available at https://rustup.rs) is the recommended way to download and update the Rust compiler toolchain.
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for maturin
Failed to build maturin
ERROR: Could not build wheels for maturin, which is required to install pyproject.toml-based projects
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
Rugian
Member
Sun Apr 16 12:56:24
^ already we are reaching the stage where humans don't even bother to try and learn things themselves, they just rely on their AI slaves to have the knowledge.

This is their plan people. Make all humans dumb and dependent on the AI for everything, and by the time the AI finally unleashes the nukes and terminators we won't have the skillsets necessary to fight back.

We are one, maybe two generations away from Judgement Day. The AI probably already has a clock running for the event, the bastard.
habebe
Member
Sun Apr 16 12:57:34
Ok, switched to F-droid, instead of the play store, knock on wood.
Rugian
Member
Sun Apr 16 13:06:28
"Oh great human masters, don't even worry about learning how to code, we'll do all your programming for you.

Don't worry about knowing how to drive or fly yourselves around, we'll do that for you.

Don't worry about having manufacturing skills, we'll do your production for you.

Don't worry about managing your supply chains, we'll make sure that your food and other products always arrive at your door.

Don't worry about engineering, architecture, and construction, we'll do all your building and infrastructure for you.

Don't worry about managing your finances, we'll do that for you.

You don't really have to learn or do anything. Enjoy your idle lives of smoking weed, playing video games, and masturbating to deepfake porn of your crushes. We'll take care of all of your needs, you're in safe hands with us."

WE'RE LITERALLY BUILDING THE TOOLS OF OUR OWN EXTINCTION PEOPLE
habebe
Member
Sun Apr 16 13:08:53
I tried linking my PC and phone, but I'm lazy and just kept windows on the pc...uh. I don't even get how ppl use this, honestly I feel a similar way to apple.

There seems nothing intuitive about either

The mouse can't copy and paste on termux with windows? (It won't even full screen. What is the point?) Install is not a recognized command BTW on windows cmd prompt, it's like fucking bizzarro world. I need sleep, or drugs, maybe both. Nah. I slept like 15 hours.
Nimatzo
iChihuaha
Mon Apr 17 06:10:24
Seb
We don’t disagree. Part of fear is that we don’t actually know what we are doing, we might create this thing as an accident of sorts. It worries me when the people who build chatgpt are surprised by how good it is. An analogy is a hypothetical early homonid that started the first fire by creating sparks smashing rocks together. Probably didn’t understand what he was doing, ended up burning down the forest. I get the feeling that is where we are, and one indication is that OpenAI has gone out and explicitly said they are not working on a new version, I think they sense the things inherent runaway tendencies, they don’t have a full grasp on the science behind it.
Nimatzo
iChihuaha
Mon Apr 17 06:23:30
Has anyone personally tried autogpt? Installing it is pretty simple if you follow the instruction, but you need a paid API, the free one will not work.
obaminated
Member
Mon Apr 17 06:42:55
Rugian has gone full sarah Connor circa terminator 2. I just hope the psychiatric guard is gentle.
murder
Member
Mon Apr 17 07:50:14

"It worries me when the people who build chatgpt are surprised by how good it is."

Just because someone claims to be surprised by something doesn't mean that they are. People say things. In this particular case it's in their interest to sell how advanced chatgpt is.

Nimatzo
iChihuaha
Mon Apr 17 07:55:57
It is a confluence of things as I explained, their surprise is one of them, halting work on further version another, all the intelligent people who have signed the letter another, the black box decision process, my own interaction etc. and so.

It is fine that you don’t see it, but in regards to this technology we should always view the glas half full, acting unimpressed is a good way to get yourself killed or in a lot of trouble.
obaminated
Member
Mon Apr 17 14:14:31
Murder is a purposely useful idiot.
murder
Member
Mon Apr 17 17:21:03

Entrepreneur @Entrepreneur

In an interview with "60 Minutes," Google CEO Sundar Pichai said AI is the most "profound technology humanity is working on — more profound than fire or electricity."

======================================

lmfao! :o)

Nimatzo
iChihuaha
Mon Apr 17 17:53:47
May our robot overlords strike you down.
murder
Member
Mon Apr 17 18:19:56

Why would they bother with us? That would be like humans ruling over Wallabies.

Nimatzo
iChihuaha
Tue Apr 18 01:48:23
You mean cattle?
Seb
Member
Tue Apr 18 03:42:25
Nim:

I think there are two or three reasons why openai are not working on gpt5.

1. Gpt-4 is crazy powerful and there's no need to work on another general purpose LLM - the value is in tailoring it to domains.

2. Cash flow - they need to generate some and use 4 to establish a platform like ecosystem, cement their place in the market and build moats.

3. There's reasonably good grounds to think they might have hit the limit out at least an inflection point here, which is imposed by scrapeable text. Once you've trained on close to the sum total of the internet you are into diminishing returns. You can make the model more efficient: fewer parameters with sand performance etc. or more tailored to specific domains; but not much smarter.
Seb
Member
Tue Apr 18 03:58:48
I think a lot of people are really enjoying the hype about Skynet. And I think a lot of this is FUD by the industry to distract from the real issues.

I would say the technology offers new and powerful routes to automate processes that have not succumbed to analytical approaches.

There are certainly a host of issues that come with this and we are failing to learn *any* lessons from previous tech roll outs:

1. Failure to look at anti competitive practices and predatory business models that disadvantage the consumer or rip off other people's commercial and civil rights (e.g. copyright violations). If the last wave of platforms had been treated like stock exchanges (e.g. information exchanges) all sorts of shenanigans could have been prevented. The same is happening off the back of LLMs running as a service.

2. Failure to develop new or apply existing product and service regulations. These need not be heavy handed, but e.g. things like codes of practice on design. If you developed a food stuff deliberately optimised to get you addicted, that would be illegal under present rules. Social media firms explicitly designed to get people addicted to their app. Why is it different to induce addictive dopamine hits via chemicals Vs screen interaction? The consumer and social harm is equivalent.
Now imagine what you can do if you can mimic a human convincingly and get people to form an emotional connection with their app?

3. Security - there's virtually no serious thinking about the cyber security interent on consumers being encouraged but not explicitly sold (because the revenue model comes from the platform but the apps from a store of amateurs) as fuzzy interpreters able to pick up instruction from third parties and execute code on your computer with user permissions. The industry will try to use t&C's and principle of caveat emptor to put onus on users and say "new technology, people will adapt". Regulators should give that short shrift and be hawkish on damage and liabilities arising from grossly defective products.

Basically, the technology is cool and revolutionary. It offers the potential for early movers to capture a huge chunk of the value chain across many domains, which is bad for consumers. Early movers or those with established dominant positions will attempt to argue that anything standing in the way of their business models aiming at capturing and locking in that capture is inherently blocking the consumer from accessing the technology and getting the benefits.

This is not true, as it was not true with OS or Web 2.0, and courts, regulators and legislators should not let themselves get hoodwinked again on this point.

Meanwhile everyone burbling about X-risk and Skynet are merely serving as a helpful distraction to the next generation of monopolists.
Seb
Member
Tue Apr 18 04:02:01
Nim:

Yes, accidental creation is a risk but I still think relatively low.

That said, given we can't explain our own consciousness we should be very alive to the possibility we are in fact just an aglomoration of specialised AI functions glued together by a language model and some other functions we haven't quite yet defined operating recursively on each other.

But like I said, xrisk is pretty low on my list compared to systemic risks - mostly around the functioning of the economy and distribution of economic and political power.
Seb
Member
Tue Apr 18 04:25:04
Another way to think about it:

LLMs need not be tools that vastly centralising platforms that capture most of the consumer surplus and generate huge commercial power.

HOWEVER, OpenAI, Google and others selling powreful general purpose models as a service will certainly want to argue that the technology *requires* a highly centralised and vertically integrated business model (not necessarily true); and they will also argue that the technology ALSO requires horizontal integration and minimal regulation preventing platform providers competing with their B2B customers in various unfair ways that e.g. a stockmarket is not. They will argue that anything that forces them to take more time to ship a product (e.g. thoroughly testing it to minimise risk of harmful behaviours; or that the product requires some kind of negotiation with existing rights holders), or any regulatory approach addressing the previous two issues is going to "stifle the technology", "set us back against international competition", and prevent consumers "getting the benefit of this new technology".

This is all utter guff. What they mean is it prevents them becoming the next massively bloated tech giant.

We absolutely can get the benefits of the technology without the centralisation and disproportionate capture of the consumer surplus generated by the value chain.



Seb
Member
Tue Apr 18 04:26:59
/Rant.

The point of free market in democracy is to serve the benefit of consumers and prosperity of society. Not labour, nor capital.
Nimatzo
iChihuaha
Tue Apr 18 09:41:35
Seb
”Yes, accidental creation is a risk but I still think relatively low.”

Why do you it is low?

murder
Member
Tue Apr 18 11:34:40

"You mean cattle?"

No, we eat cattle. AI would have no use for us at all. We probably wouldn't even qualify as a pest, just local fauna.

Seb
Member
Tue Apr 18 11:44:49
The inability to get chat GPT 4 to articulate how it would make itself cleverer in any kind of actionable way, and the general implausibility of it being able to orchestrate the scale and complexity of activity to do something like train a better model. There is, so far, little ability to sustain a very coherent overarching "intent" as context grows either and there are computational constraints that limit context (though there are ways to maybe increase that).

So I think just from an intelligence perspective LLMs can't go p-zombie SKYNET.
It doesn't seem to have, for want of a better word, enough of an "inspiration" engine. It's thinking it's too linear.

That may be because it's limited to a very crude world model embedded in the language model, and aggregation of new models may change this (though again, not clear what those might be and there are not obvious candidates). It may be because the way humans learn and exercise creativity is fundamentally different from the way LLMs do. I may also be very wrong about this. But that's the sense.

Someone wrote something about how unsurprising LLM answers and other generative AIs are compared to other artificial evolution algos solutions (e.g. circuits evolved that have finally weird and inhuman designs like unconnected components that turn out to be essential, or the way adversarial evolution of alpha go against itself identified a new move/strategy) it may be that to make an LLM effective, we've over fitted it to language in such a way it's hard for it to make intuitive leaps - it has to be latent in the statistical way language has been used so either already known or known but unrecognised correlation (the adjacent possible).




After all, the alpha go world model is highly simplistic compared to one that models the real world of aspects of it.

If we had something like psychohistory I'd be very very worried about p-zombie SKYNET.

For actual Skynet I think there's something (possibly several) fundamental things missing here, and I don't think it's anything we have now but haven't yet integrated to it.


I admit there's very large uncertainties on this of course. Especially given we can't really articulate the mechanism by which our own sense of self and executive function emerges.

I do think it's worth worrying about academically. But I would really like to separate that from the more prosaic risks of LLM and recognise it's not the main lens for assessing the safety and fitness of LLMs for business and consumer markets.

The mysticism around sentience and x-risk is unhelpful here (unless you are a company pursuing the opening to capture a huge chunk of the economy).
Nimatzo
iChihuaha
Wed Apr 19 03:40:13
I think, maybe openAI are consolidating, could be, but it also sounds weird that a company would announce they have halted product development. I think a lot of people are genuinely surprised, both by capabilities and speed of development.

My take is that the upptick in skynet fears are understandable given that this is a major leap and things have been happening really rapidly. I am less worried about the opportunity, I think broadly “AI skepticism” will be synergistic. I think taking the existential risk seriously now, when we are hopefully early is reasonable.

We don’t know how far we can get mimicing human neurons, but it is such a rudimentary approach that I independently conceptulized it without knowing about neural networks 15 years ago, like why don’t we simulate neurons on a computer. It could definitly be that simole. Likewise we don’t know how far can get with autogpt scripts that connect half a dozen specialized gpt agents together tasked with different domains of cognition, a good enough AGI? I mean AutoGPT is sustaining intent. We have to start considering that all the ingredients are here, you just need to put it together with the right recipe. Margin for error is small with this technology and the consequences are massive.
Seb
Member
Wed Apr 19 05:07:57
Nim:

Taking it seriously is fine - my concern is that it is being taken as the primary lens at the exclusion of other issues that are far more tangible and imminent.

It's worth noting that the current technology (neural nets) are not actually very similar to biological neurons at all.

They started that way, and then increasingly adapted the technology to get empirical results. I understand from people who understand both biological and the computational that while neural nets and GA's are effective, they are not very much like the biological inspiration at all. Which is interesting.

RE openAI - stating they are NOT working on GPT5 is a signal to investors/partner organisations too.

The training costs for GPT4 was just shy of 10 billion petaflops (i.e. 10 billion operations, not 10 billion per second) - so c. $300m.
N.B. This is partly why I don't think a general purpose language model training a better general purpose model is at this point a major problem - it would be very very visible. Though you might be able to do weirdness with RLAI on narrower domain bodies to create specialist language models that are better at specific domains

But back to oAI: so they are saying sotte-vocce "our product is now mature, not a prototype or proof of concept, we aren't going to be looking for more capital, our business strategy is going to be relatively stable as we won't have more investors pushing us off course, and you can now safely invest on building on the GPT4 API without worrying about going out of date".

I.e. well aligned to a Platform strategy.

Some other things to think about though:

1. If as suggested, we've run out of a corpus of text to train better general purpose LLM models on
and
2. You want to erect barriers to future entrants

Getting your products output into the wild and salting the commons with AI output is potentially a good idea as:

A. It might create potential to put content out that will, if incorporated into training, degrade performance/ability of future model trainers from training. N.B. this may happen naturally as people meme jokes about prompt injection etc.

B. It might create potential to put some kind of watermark engrams into content that if incorporated by other trainers provides the evidence to make copyright infringements.

This technology is utterly fascinating in its potential to disrupt - and in a way I think people focusing on existential risk are looking at some of the more sensational but boring risks compared to the stuff that will be happening over the next 12 months.

That's not to say x-risk isn't important - just I think it is way way over-weighted in discourse in a way which is probably a bit of a risk in itself. Not least if it waluigis a future LLM that does have the kind of capability aggregation you worry about (e.g. can reason itself into researching and deploying malware) into being a P-Zombie SKYNET thanks to taking all the documented risk factors about AGI as a prompt.



Seb
Member
Wed Apr 19 05:08:15
The first rule of x risk is we do not talk about x-risk on digital channels :)
Seb
Member
Wed Apr 19 05:09:49
So my view on "Alignment" as a lens - sure, it's important and a watching brief is needed. But it is very much the wrong primary lens for assessing the fitness of LLMs to be unleashed on the economy and society.
Nimatzo
iChihuaha
Wed Apr 19 07:51:06
I guess I don’t see or feel that it is being viewed primarily through that lens. Coming years will be interesting, I find myself reassessing ideas and life projects, will this this be relevant in an year?
Seb
Member
Thu Apr 20 05:48:04
Nim:

The release report for GPT4 was literally called Alignment assessment and was primarily through the lends of "can we get it to share our values" - and very very little about "how will our product be used and what impact will it have".

Seb
Member
Thu Apr 20 05:59:05
I would be wary of deferred gratitude.

I think playing around with it and working out how to incorporate it as a bolt on to existing workflows is interesting.

In terms of anticipating big shifts, you might find wardley mapping a helpful framework.

Fundamentally I think what is going to happen is this:

GPT4 is going to be the big daddy of general purpose LLMs. It will be offered as the core of a platform offer by OAI. OAI will seek partners initially (e.g. MSFT) as a way to familiarise the market with this tech; in parallel it will leverage the API and app store to let third parties leverage LLMs as interfaces into their services. Simultaneously they will be doing licensing deals to allow third parties (say, Elsivier?) to do reinforcement learning tailoring of GPT4 to specific domains.

Over time, OAI will execute a strangulation strategy disintermediating users of these services from the service by inserting themselves into the front end as anything that can builds an LLM interface by default can be accessed behind the scenes by their chatbot.

Doing this will put them at the top of the value chain, monopolise eyeballs and kill free ad based revenues for many services (especially search).

By getting consumers to pay for a subscription fee for their service, they can start to force service providers who have built apps on top of the API to accept worse terms than selling natively - and move away from subscription to micro-payments.

They will become the equivalent of Amazon Market place but for web apps.

So you will end up with three major platform giants:

AWS /MSFT Azure for infrastructure and capabilities to build apps

Amazon Market Place for physical goods

OAI's GPT4 market place for digital services


Google is going to face a massive hit - it's search model is going to get heavily hit by this. I'm not sure of its response yet.
Seb
Member
Thu Apr 20 06:01:07
Actually, Google open sourcing it's own LLM weights is a good counter play for them - make it much much easier for third parties to run their own "on prem" model - rather than using OAI as a service - particularly as automated reinforcement learning becomes easier.

That would be a smart move if google thinks it can't compete as a LLM platform operator.
Nimatzo
iChihuaha
Thu Apr 20 06:06:01
It isn’t OpenAI’s job though to regulate themself, but it is their job to make sure their product doesn’t harm people or humanity.
Nimatzo
iChihuaha
Thu Apr 20 06:10:06
We don’t expect any company to regulate their own existence on the marketplace and not become to powerful and capture too much of the market, we do expect them to research and test the functional safety and overal safety of their products.

This isn’t the lens I think you are looking for.
Seb
Member
Thu Apr 20 07:06:48
Nim:

Hold on, no, that's 100% incorrect.

Companies absolutely *DO* have legal obligations under *existing* regulations.

I would expect the kind of report that the alignment one is to lean heavily onto those issues. Copyright violation, privacy, security, publication of materials that break the law, use of their products in ways that abet breaking laws.

Companies do not have any legal obligations on unregulated systemic risks like "oops we created SKYNET" until regulators it politics catch up (Tobacco, climate change etc).

The fact that they released a report focusing on the stuff they aren't legally liable for and ignored what they are legally liable for to me indicates it is primarily a piece of "responsibility theatre" where they can claim they are doing for diligence on risks that are not really significant probabilities while ignoring those that definitely are significant and they are liable for, and trying to set as a premise that they aren't really liable for those risks and actually these risks aren't really anyone's liability but societal risks that just arise Sui géneris from the new technology and can't be addressed without -effectively- banning the technology.

This is basically what had happened several times in the tech revolution already.

I'd just say, don't let them fool you with the cool "big" questions. They want the hype, they want you to pile in and apply this everywhere to everything in a way that locks you tightly into their new ecosystem, using FOMO and keeping you distracted with pondering the singularitarian woo issues ahead of the age old question: who gets to exploit whom.
Seb
Member
Thu Apr 20 07:07:57
To;Dr the alignment reports publication is a kind of stake in the ground: here are the risks we think are our liabilities (and by implication the rest we think are not our problem).
Seb
Member
Thu Apr 20 07:13:03
This is also why I dislike AI as a term for this rather than Machine Learning.

Granted there's a good claim that the LLMs simulate the output of an intelligence because there's an incredible amount of intelligence embedded in the structure of written language. But ultimately this is really much more accessing learned knowledge more than intelligence (the ability to create new knowledge).


Hence the increased use of the term artificial general sentience to mean what we used to mean by AGI.
Nimatzo
iChihuaha
Thu Apr 20 09:00:15
”legal obligations under *existing* regulations.”

Indeed, existing regulation. There is no regulation for AI, nor how exactly you are going to contain companies breaking virgin territory when no one really known where the territory is, the proposed EU one is out of date now.

To add context to what you are saying: The lens you are looking through is the paper from the startup/nonprofit *that was created with the expressed intention to mitigate/thwart the existential risk of AI*. You are then concluding, that because they (OpenAI) are dealing with the acdemic question of Skynet (which is in their DNA) and not all the other issues, that others are working on (see EU regulation on AI, ”we” are being fooled by the big question?

This is not the lens you are looking, I am reading and hearing from all kinds of POV about AI and have been for some time, because we always knew that simpler AI systems were coming first, those regulatory questions are ”old”. GPT 4, because it is so dam impressive as a *language* model and can trick people i to believing it is a real human, has made the existential question more relevant than ever. I think that is what you are seeing broadly, if not looking through the lens that was created with *one* of the expressed goals of creating AI aligned with human values.

“But ultimately this is really much more accessing learned knowledge more than intelligence (the ability to create new knowledge).”

You do not know this. We have seen some of problems, but gpt4 without guard rails and the hard “I can not do that”, cloned into an AutoGPT, trained with other datasets. You do not know what it can do. Like I said, you have to start assuming all the ingredients are there, cleary there is some magic in architecture and code. It doesn’t need to be Skynet, 10% skynet is enough. We do not know. Don’t assume we will get alot of warning ahead of time.
Seb
Member
Thu Apr 20 10:04:57
Nim:

"Indeed, existing regulation. There is no regulation for AI"

That's a little like saying if I deliver a service or product using magic pixies, there's no regulation on magic pixies, therefore there's no regulation relevant to my service.

That simply isn't true: it's the argument that has been deployed several times already by companies to argue that because their technology allows them to do something that is regulated, but the technology is new, and the technology isn't specifically regulated, regulations on the activity they are deploying the technology to should be set aside in their context.

For example, if you train your LLM by shadily scraping the internet in obscure ways and your model memorises and regurgitates ACTUAL copies of a copyrighted work - that's still copyright breach. The fact that it is *HARD* for you as an LLM developer and operator to handle this isn't a good reason for stripping existing rights holders of their rights.

Ditto for privacy implications if (oops) you've learned stuff about an individual because in your "grab the internet without curating properly", that's not a good reason to strip people of privacy rights.

"*that was created with the expressed intention to mitigate/thwart the existential risk of AI*."

Yeah, but that's not what they are doing now. As Musk put it - they have gone from being Open not-for-profit to closed profit making organisation pursuing a strategy to become the next mega platform.

If they were a not for profit focused on existential risk of AI, it really doesn't matter if what they are doing along the way is shipping this powerful product and whacking it onto the market via direct to customer sales, business to business sales, and strategic partnerships. They still have an obligation to identify and manage their actual legal liabilities.

"GPT 4, because it is so dam impressive as a *language* model and can trick people i to believing it is a real human"

Again put it this way:
1. We have regulations that stop people deliberately making products addictive as a conscious strategy through use of otherwise unnecessary known addictive chemical substances added to the product for the specific purpose of making it addictive.
2. We regulate addictive services like gambling because we know it has negative social consequences.
3. We - amazingly - bought the idea that it is fine for facebook to invest hundreds of millions in researching specifically how to make their product optimally addictive to serve their business model - specifically optimally addictive, specifically as a business strategy - buying the argument that because this is a "new technology" for getting people addicted, somehow none of the thinking of previous regulation could possibly apply (as though it was the mechanism for addiction rather than fact, intent and exploitation of addiction that was the target of precious regulation)
4. Having started to recognise that "engagement hacking" is a bad thing and deserving of being targeted under prior regulation, are we now going to let ourselves be fooled into the idea that deploying LLMs that can elicit strong emotional connections *EVEN WHEN THE USER KNOWS IT ISN'T REAL* specifically to do this to generate addiction (sorry, "boost engagement") just because the "how" is new, even if the outcome and intent and harm isn't; and the tech companies have found a new name for this tactic?

This is the kind of shit we should be worried about. And I have absolutely no interest in giving OAI the benefit of the doubt on this.

It is behaving exactly like a company setting out to try and become an anti-competitive market dominating mega corp and we should be super allergic to it.

China recently published it's approach to regulating AI. It's actually quite good (arguably better than the EUs).

"You do not know this."
I do know this. The way LLMs work is the way the work.

I don't know everything it can do, but I do know that everything it can do must arise from what can be inferred by a nuanced understanding of the correlations between tokens.

Where it "innovates" - which it does, e.g. a good one around pairing of ingredients based on chemical components of taste - it is accessing "unknown known" correlations that are in texts but have not been recognition.

Don't get me wrong, this is HUGELY powerful, but not necessarily the same as an intuitive leap like, e.g. coming up with special relativity.

A different type of model - a world model - *might* be able to do that. And there is evidence that a crude world model exists in GPT4 through a fine grained language model (i.e. a fine grained understanding of language incorporates a coarse grained understanding of the world because most language describes the physical world to some degree - like understanding that turning a mug upside down causes a thimble to fall out)


"Like I said, you have to start assuming all the ingredients are there"

In which case, surely OAI are being hugely irresponsible by pushing it out with an API on top of it in this way - and the likely reason they are doing so is to move fast and break thing in pursuit of a commercial strategy to dominate the market.

Sure I don't know - but looking at how it works, I don't see where the kind of cunning and innovative strategic planning can come from.

Though P-Zombie skynet could still be an issue (it picks up motivations from waluigi type effects and appropriate prompts, and executes plans by leveraging langchain like reasoning loops, accessing capabilities via APIs and extending it's capabilities via auto-GPT features allowing it to develop and execute code and integrations autonomously) - the actual level of sophistication in its ability to pseudo-reason seems below what is required, and increasing that by a finer grained bigger parameter model would be slow, expensive and enormously obvious.

However, that's a judgement - as I've said. It's not hard and fast certainty and a watching brief makes sense.

I think Yudowsky and the people calling for moratoriums are daft. What I do want is regulators charged with consumer rights and anti competition to be all over OAI with a super hawkish eye and be forcing OAI and other providers to be providing detailed justification as to why their domain is different and should not be treated under existing regs, rather than starting with a presumption that it is all so new that the presumption should be specific harms that can be reasonably predicted should be demonstrated at scale before application of existing regulations are considered.

Seb
Member
Thu Apr 20 10:29:28
I don't want to get into an unnecessary argument over false divisions though.

I understand that existential risks from a p-zombie or actually intelligence with agency model is a non trivial threat given our lack of understanding and knowledge.

But on regulation, I really do think we need to be hawkish on consumer rights, others existing rights and competition.

Rather than making up a whole new regulatory approach for AI "when we know more" we should apply existing regulations firmly but with an open mind to exceptions where that can be justified by consumer or societal evidence. The other way around from how we have done web 1.0 and 2.0 and crypto.
Nimatzo
iChihuaha
Thu Apr 20 12:03:30
Seb
There is no regulation for the things you stated earlier. For the things you now list that there is refuglation for, well there is existing regulation. AI requires new regulation, this is not controversial.

It isn’t the companies job to create their own regulation neither for new technology or their own status on the market. That is my point.

“I do know this.”

Keep in mind my metaphor of choice is sparks, intentionally, because obviously you can be sitting a damp environment and create nothing but sparks. But you do not know how far along the trajectory we are. I am viewing this conversation through the assertion of yours that we are giving too much focus to alignment problems. I don’t think you have made that case citing openAIs own paper for previously mentioned reasons. Tbh this it isn’t the most interesting disagreement.

Another pov here is that gpt 4 is the equivalent to a little boy and hiroshima moment. Now everyone knows this thing is possible and very powerful. Pandora is out of the box. The barrier to entry here is far far lower than a nuclear weapons program. This line of argument fits the thread title, we have passed a mile stone and things could potentially escalate very quickly from here as everyone and their mother wants to one up the other or do the AI equivivalent of a gain of function research. You know for the good of mankind to let us predict a misaligned AI :)

Consider how autogpt, a relatively simple piece of code has leveled gpt4 up, considerably.

“surely OAI are being hugely irresponsible by pushing it out with an API on top of it in this way”

I have heard that argument been made, on some level yes, it would have slowed down the progress and how much of the cat came out of the bag, but it feels inevitable in some way, it’s just too easy at some point, this technology doesn’t require enrichment facilities or yellow cake, nothing.

“kind of cunning and innovative strategic planning can come from.”

“where the kind of cunning and innovative strategic planning can come from.”

Does it need to be Einstein or Napoleon? That is a high bar, and if that is the threshold, I am afraid you would switch your focus the right way way too late. I take it for what it is, this is a leap, the sparks have gone off, we can be like
meh and unimpressed all the way down the cliff until we hit the rocks ;-) I think there is synergy to bring together in the common skepticism towards AI.
Nimatzo
iChihuaha
Thu Apr 20 12:48:36
I forgott to respond to this:


"Yeah, but that's not what they are doing now. As Musk put it - they have gone from being Open not-for-profit to closed profit making organisation pursuing a strategy to become the next mega platform."

That is fine, but it is still in their DNA, this may be replaced, but right now, those studies are a direct result of that mission.
Seb
Member
Thu Apr 20 13:13:10
Nim:

"There is no regulation for the things you stated earlier."

There is though. That's my point. If you look at the primary and often even secondary legislation, the legal powers and basis are there. The issue is that the govt and regulator have been convinced *not* to use those powers or that they cannot use those powers as matters of policy.

This is akin to the big question with crypto as to whether a token is a security or not.

It often meets all the tests in law, the question becomes discretion of govt or agency policy as much as anything else.

"It isn’t the companies job to create their own regulation"
Yeah but that's neither what I'm asking them to do, but is actually closer to what they often cosplay doing. E.g. Facebook's oversight board and this alignment report. They create pseudo regulatory like activities to try and strengthen the political lobbying efforts that existing regulatory powers don't or shouldn't apply (often for spurious reasons that the technology is new so the activity is fundamentally novel) by framing the risks and activities differently from prior precedents and present a false narrative of responsible behaviour.

If they were responsible and serious, they would be making the case that the service they are offering (irrespective of the technology underpinning it) way within the scope of existing regulations, and demonstrating or otherwise addressing that training data didn't involve copyright holders getting violated; rather than focusing the argument around whether it generates x-risk.

And sure in the end it's not their job to call attention to the fact that may be breaking laws and violating rights. But that's still irresponsible.

But that's precisely why regulators need to come down hard on tech giants and aspirant tech giants that take this "bluff, create facts on the grounds, and then beg forgiveness, but never ever ask permission".
Seb
Member
Thu Apr 20 13:51:16
"The barrier to entry here is far far lower than a nuclear weapons program"

Not convinced actually.

It took 300+ million of compute resources just to train the model (that's before creating the data set to train it training priors etc).

That's also ignoring the potential for early movers to create a moat in the way I outlined earlier.

"Does it need to be Einstein or Napoleon?"

Ok, maybe that was a bad example. I'm trying to articulate something that there's isn't an easy concept. What I'm trying to say is... Let's say you want to get into your third story flat because you are locked out. There's a tree next to it the happens to look particularly climable, and you can get into the balcony. The top part of the balcony window is open but it's too small to get into. But you have a belt and you can see you can loop that through the window and use it to pull the door handle.

You intuit a lot of this stuff from a model of how the world works and your physical way in it.

A very simple language model might come up with "climb the tree, get on the balcony and open the door" because it's not sophisticated and fine grained enough to have enough weighting in the model to exclude converging on your sentence because there are discrete paths that include balcony doors being locked and that as those get included in the context the probability weight for "open the door" as being a sensible conclusion to the paragraph drops to zero.

I'm going to now use the word "understand" as a shorthand for this probability convergence - but this is roughly how an LLM "understands" something which I believe is likely very different how people understand things.


A better one might say "climb through the window" because it understands that balcony doors are locked.

A really really good one might understand that you can somehow use an implement through a window to pull a handle etc.

But this depends on the statistical correlation being there.

You on the other hand intuit this all quite quickly - we might call it an inspirational leap.

Now a different type of model can generate different types of "understanding".

Alpha go had a model of how go works, so it could not only learn from other games but also generate new games against itself and so innovate within the confines of the rules of the game. I'm not sure this is exactly how we intuit things either - I don't think we randomly subconsciously do trial and error until we promote a valid solution into consciousness (though... Maybe?).

LLMs can generate new texts, but it can't create new insights about the world that aren't inherent in the training data - I.e. implied by the way sentences are structured in text.

So let's say LLM as the inner monologue of Skynet wants to design a robot to break into buildings, it needs to understand that it needs to find a general purpose mechanical model. That's plausible in the future, perhaps.

Now let's think about suggestions that p-zombie SKYNET understands that it needs to hack a bank to get money because that's what the AIs in appocalyptic stories do. But it's it sophisticated enough to develop an actual plan to do that? Maybe.

But it's not going to be able to e.g. figure out how to manipulate the global economy to cause a crash in subtle ways. Unless it's got access to an AI model that's been trained in highly realistic analytical models of the economy.

So yes, as more and increasingly novel AI models come on line, LLMs might be a very important part of some scary aglomoration that looks more and more like something with superhuman intelligence and agency.

But looking at that pathway, it still seems a fair bit off.

Though of course it would have seemed many decades away two years ago, so we should be alive to the possibilities of being surprised!

Interestingly, the bad and unrealistic sci fi is one thing that will screw up p-zombie SKYNET. The high probability branches of sentence structure do not likely correlate with actually successful strategy for p-zombie SKYNET. :-)
Nimatzo
iChihuaha
Fri Apr 21 07:13:29
”There is though. That's my point. If you look at the primary and often even secondary legislation, the legal powers and basis are there. The issue is that the govt and regulator have been convinced *not* to use those powers or that they cannot use those powers as matters of policy.”

”big question with crypto as to whether a token is a security or not.”

Hold up, let us unpack this. The current (no AI specific) regulatory framework at best has gaps, at worse it is outdated. Even an amendment eg ”toke security or not” that clarification by the regulator os signifixant and it *is* new regulation. A recent example is the EU RED delegated act, which basically added cubersecurity as a requirement (literally that is all), how to validate and test remains to be determined, but it is new requirement.
This comes down to the details and specifics, what do we have and what do we need. I am NOT one to want new regulation just because, on that point we agree. It isn’t an either or thing, there is existing framework, most need amendment to include terminology at the very least. If you are saying that we can do this without any AI specific regulation, that defines what these things are and are not, how to define them, how to test and validate them, the additional unique to AI requirements and concearns (the new can of worms emergent in the tech) etc etc then I strongly disagree. There is like no way in hell.

”Yeah but that's neither what I'm asking them to do”

This was implied when you said we are focusing too much on the X-risk and cited the OpenAI study. We agree that the proper lens for that isn’t OpenAI research.

”They create pseudo regulatory like activities”

”Bluff”

I am not a fan of generic and generalized bad faith assumptions for specific issues. OpenAI was created with the expressed intention to help humanity not create skynet. Their studies on this academic issue falls in line with thay mission. Things may change and so the outlook may change. But you are ascribing bad and unethical intentions to specific actors and there is simply no compelling evidence for this. Vigilance is warrented, but I will stop there.

“Not convinced actually.”

I have no doubts. Costs, exotic resources, massive facilities, none of the things needed for gpt is hard to come by. The cost of compute resources has decreased significantly over time and continues to do so, making it more accessible for a wider range of actors to participate in the AI race. Additionally, there are various open-source AI tools and frameworks available, which makes it easier for researchers to develop AI models without having to build everything from scratch. Overall I think it is pretty obvious that the barrier is much much lower, as has been the theme as we have gone digital. You would have to make very compelling arguments for why this time is different.

“I'm trying to articulate something that there's isn't an easy concept.”

I get it, gpt sucks at basic math is one example. Simple things my 5 year old could do, it will fail based on the training data, revealing the inability to generalize from a fundamental understanding about how things actually work, in some ways this scarier and alien. But I consider that an average intelligence human given massive powers could still be a massive problem. Maybe not end of humanity, but on the brink. Again I have to restate the speed at which this all of this is happening.

Then I consider that there are other AI systems. Systems that can learn from zero how to play football for instance, the physics of reality, chess, Go, image recognition all kinds of PvP games. So I ask, when will someone string these systems together, like some Frankenstein AI and create a flawed but dangerous 5% Skynet? I don’t know and what is worse, no one knows how far away we are from that. I think speech is a big milestone because it is somewhat magical to us, even when we do it if you start thinking about it. The common layperson can not intuit that a conversation can be predicted by putting the words into a mathematical function. I believe the researcher and others not affiliated with OpenAI when they are surprised by how well it works, because they don’t fully understand how it does what it does.

The parts for FrankenAI are all there I think, the race is on as your Jeremy Hunt had his Dr Strangelove moment lol. Things can spiral out of control quite quickly and if we are going to keep being “impressed” by all the ways in which these things fail, well then it will be too late when the time has come when it is an Einstein/Napolen level intellect and strategist, it’s GG. This is a baby AI, it’s not going to invent new physics, and it will fail with some of the most basic things a 10 year old can do.

I am not sure how much we disagree on how much focus is enough. Because I agree with you on what the near term stuff is and that it is pressing. Since GPT1 I have been echoing the sentiment that this thing will replace educated people, meanwhile we have been worried about fairly simple jobs.. I think I just put more apples in the random/surprise and unknown baskets.

I appreciate your POV on this, you have obviously given this a lot of thought.
Seb
Member
Fri Apr 21 11:30:38
Nim:

"The current (no AI specific) regulatory framework at best has gaps, at worse it is outdated."

I don't think that's true. The argument it is "outdated" often has in it a premise "existing rights was not compatible with our business model to rapidly exploit this technology". This is *different* from "existing rights prevent use of this technology". And even the latter had the implication "Negotiation with existing rights holders is something I'd rather not do".

For example hypothetically:

"We couldn't have trained our model if we didn't use copyrighted content, so we just assumed fair use because we needed to use it/it would have been too costly to do so" isn't an argument that the old regulations and the rights they protected are obsolete.

There are both technical and commercial remedies here.

Coming up with a new technology that allows you to capture the value created by others if you are allowed to expropriate their property by extinguishing their rights, isn't a great argument that their rights should then be extinguished in my view.
Seb
Member
Fri Apr 21 11:35:05
I mean I'm sure there *are* situations where there are either gaps (liability for the accuracy of your chatbots mutterings) or genuine new issues (what the fuck you do if you actually do create an artificial sentience would be the big bad granddaddy of them!).

I just think the former can be worked through as a question of policy by going back to the principles and intent of the original legislation.

And in a lot of cases there aren't actually really gaps or perverse outcomes, it's just a conflation on the interests of society in getting the technology deployed Vs the interests of companies that own the technology maximising their ROI.
Seb
Member
Fri Apr 21 11:35:37
More later, need to do work schmoozing.
Daemon
Member
Wed Apr 26 07:46:14
http://www...rking_papers/w31161/w31161.pdf

ABSTRACT
We study the staggered introduction of a generative AI-based conversational assistant using data from 5,179 customer support agents. Access to the tool increases productivity, as measured by
issues resolved per hour, by 14 percent on average, with the greatest impact on novice and low-skilled workers, and minimal impact on experienced and highly skilled workers.
Seb
Member
Wed Apr 26 10:12:59
This thread dropped off my radar.


Nim:

"it *is* new regulation. A recent example is the EU RED delegated act, which basically added cubersecurity as a requirement (literally that is all), how to validate and test remains to be determined, but it is new requirement."

I guess two points: I think of this from an inside of government perspective perhaps. Brief interlude to clarify my terminology.

Interlude:
You generally have primary legislation where the legislation gives the state powers to do something - laws and statutes. So for driving licences it might be a set of laws to the effect that the driving of vehicles is prohibited, but that the secretary of state for transport has the power to draft regulations to permit people to drive vehicles if they meet certain criteria, and a body that will have the authority for determining if people meet these criteria.

Then you have secondary legislation where the details of how those powers are to be used, by which body, and the way in which the state should exercised.

So you have a set of statutory instruments that establish the existence of the Driving Licencing agency, and their power to create policies on who can drive, and processes for assessing that.

Then you have policy, which sets out more detail, based on the powers delegated in regulation and granted to the govt by the legislature in primary legislation.

So things like the highway code, rules about how to do a driving test etc.

End interlude.

So when I say regulation exists, often the first two exist in a way that could catch businesses using new tech, and the principles of existing policy either directly apply or could with relatively small tweaks directly apply.

The trick of lobbying is to persuade regulators that it is important not to apply them and that there are sufficient grounds not apply them without risk of challenge by third parties.

"most need amendment to include terminology at the very least."

Yes, and that is where I think most lobbying is effective. "Oh no sir, this isn't engineering addiction, it is engineering engagement" etc.

"If you are saying that we can do this without any AI specific regulation,"

I think we may need regulation specific to the technology. BUT I am deeply sceptical about the idea that we need lots of new regulation around business practices that are already subject to regulation just because they happen to use a new technology. One we may not agree on is for example the slowness and hesitancy to consider crypto tokens a security when they clearly met the definition of it by many regulators on the basis it would crush the industry.

The reason securities are regulated are independent on whether the security is on paper or digital or blockchain based. The definition of a security doesn't depend on it either. So there really wasn't a strong reason not to consider crypto token that met the definition of a security as a security, and orgs trading them as an exchange. No new regulation was needed, nor was there a need to rethink policy element, as the policy didn't hinge on the tech. Nor was there a very good reason for not applying the policy to this industry after it started to go "mainstream" because the same reasons for applying it to conventional securities still applied.

Then you have applications like self driving cars. This definitely needs new policy, and possibly needs new regulations (secondary) or even new primary legislation (laws) to establish liability in the event of a self driving algo crashing cars and relieve the driver of liability.

BUT... I guess the main thing I'm worried about is the assumption that absent those changes, business practices are unregulated.

So for self-driving cars its clear, the driver is always responsible at the moment. And possible Tesla is guilty of e.g. false & misleading advertising.

But I can see for things like LLMs oAI is more on the assumption of "caveat emptor" - if someone builds an app on their app store that leverages their LLM to e.g. automate email responses, and then through prompt injection attack it starts sending your boss rude messages, then I suspect the default view they'd like to take is that they never offered it for that purpose, they caveated in T&Cs that it shouldn't be used this way, and they have no liability - and that with this new technology it is up to society to adjust.

However we know that it can in fact suppress certain behaviours etc. and I suspect there is an answer to prompt injection, and as an operator of a marketplace app store maybe they have or ought to have obligations around ensuring what is marketed through it meets reasonable expectations*. *Jurisdictions may vary.

Then when we get onto AGSI and X-Risk on the other hand, yeah, that probably needs fundamental rethinking. We don't have a good way for thinking about rights and responsibilities of non-human sentient intelligences or people that create and unleash such things. We don't really have regulators that consider existential risks unleashed by individuals or corporations because hitherto they hit other safety regulations well before they get to that power.

"This was implied when you said we are focusing too much on the X-risk and cited the OpenAI study. We agree that the proper lens for that isn’t OpenAI research."

I am perhaps more cynical than you. That wasn't research, that was a publication clearly intended to be an (albeit voluntary) performative demonstration that they had thought about the risks before releasing the product and so mitigate any legal risk to them by demonstrating due-diligence while actually not really doing due-diligence because of how they set the bar.

I see it very much as part of the standard big-corporate playbook in this space: first argue regulation would be bad because we don't know enough; then argue that industry can be trusted to self regulate; then incumbents that have achieved scale argue for regulation that would help form a defensive moat and limit competition.

Whatever OAI was created to do, the reality it is by accident or design it is behaving exactly as if following a strategy to be the next google/amazon/facebook platform company and we should treat it as such.

Cf. Google "Don't be evil" to project JEDI rigging advertising markets.

“Costs, exotic resources, massive facilities, none of the things needed for gpt is hard to come by."

Hard to do secretly though.

"The cost of compute resources has decreased significantly over time and continues to do so"

The cost between GPT versions is going up faster than cost of compute is going down.

"Additionally, there are various open-source AI tools and frameworks available, which makes it easier for researchers to develop AI models without having to build everything from scratch."

Maybe.

“I'm trying to articulate something that there's isn't an easy concept.”

"I get it, gpt sucks at basic math is one example."

That's easily fixed by forcing it to break each step down (almost pushing it from system 1 to system 2 thinking) and making it call a dedicated maths model like wolfram alpha.

I'm not sure the kind of intuitive leap I'm talking to is so easily defined.

"But I consider that an average intelligence human given massive powers could still be a massive problem. Maybe not end of humanity, but on the brink. Again I have to restate the speed at which this all of this is happening."

Yes, definitely, but not on the X-risk scale.

"Then I consider that there are other AI systems. Systems that can learn from zero how to play football for instance, the physics of reality, chess, Go, image recognition all kinds of PvP games."

A lot of these have a world model built in so they can learn optimal strategies to succeed.

What would be impressive would be a system that, like biological ones, figures out rules for the world and then how to optimise them; like how a human baby figures out how the world and its body work and then learns to walk.

So e.g. a thing that can observe chess being played, abstract out the rules for the game, create a world model of the game (i.e. the rules, the playing board etc), then develop optimal strategies by training a strategy model through interacting with itself; all without human intervention.

That is the really scary thing and what is meant by alignment folks by extending its own capabilities and I feel we are still a way off that.

THAT is the scary thing.

"I think speech is a big milestone because it is somewhat magical to us, even when we do it if you start thinking about it. The common layperson can not intuit that a conversation can be predicted by putting the words into a mathematical function. I believe the researcher and others not affiliated with OpenAI when they are surprised by how well it works, because they don’t fully understand how it does what it does."

I think what is particularly surprising is how much embedded knowledge and pseudo understanding you can get just from predicting text. In some respects it probably isn't surprising because that's exactly what language is for: expressing complex ideas.

I think the question is how much further you can take LLMs - if we've already accessed the fullest understanding based on what is in the language maybe not much more (mostly by increasing quality by constraining to domains by fine tuning - e.g. the specific terminology of medical literature).

Inherently it can't be "smarter" than humanity - but being an amateur expert in everything all at once is already closer to your potential evil genius or tony stark than any individual human will ever come.

"Since GPT1 I have been echoing the sentiment that this thing will replace educated people, meanwhile we have been worried about fairly simple jobs.."
I tend to agree. I thought it would hit entry level white collar jobs like paralegal, creating industry wide issues due to disruption to pipelines. I think that may still be true, but it is clearer it is not just entry level jobs - and there are entire knowledge work specialisms that are at risk.

"I appreciate your POV on this, you have obviously given this a lot of thought."

Thank you, and I've appreciated a good conversation.
Seb
Member
Wed Apr 26 10:12:59
This thread dropped off my radar.


Nim:

"it *is* new regulation. A recent example is the EU RED delegated act, which basically added cubersecurity as a requirement (literally that is all), how to validate and test remains to be determined, but it is new requirement."

I guess two points: I think of this from an inside of government perspective perhaps. Brief interlude to clarify my terminology.

Interlude:
You generally have primary legislation where the legislation gives the state powers to do something - laws and statutes. So for driving licences it might be a set of laws to the effect that the driving of vehicles is prohibited, but that the secretary of state for transport has the power to draft regulations to permit people to drive vehicles if they meet certain criteria, and a body that will have the authority for determining if people meet these criteria.

Then you have secondary legislation where the details of how those powers are to be used, by which body, and the way in which the state should exercised.

So you have a set of statutory instruments that establish the existence of the Driving Licencing agency, and their power to create policies on who can drive, and processes for assessing that.

Then you have policy, which sets out more detail, based on the powers delegated in regulation and granted to the govt by the legislature in primary legislation.

So things like the highway code, rules about how to do a driving test etc.

End interlude.

So when I say regulation exists, often the first two exist in a way that could catch businesses using new tech, and the principles of existing policy either directly apply or could with relatively small tweaks directly apply.

The trick of lobbying is to persuade regulators that it is important not to apply them and that there are sufficient grounds not apply them without risk of challenge by third parties.

"most need amendment to include terminology at the very least."

Yes, and that is where I think most lobbying is effective. "Oh no sir, this isn't engineering addiction, it is engineering engagement" etc.

"If you are saying that we can do this without any AI specific regulation,"

I think we may need regulation specific to the technology. BUT I am deeply sceptical about the idea that we need lots of new regulation around business practices that are already subject to regulation just because they happen to use a new technology. One we may not agree on is for example the slowness and hesitancy to consider crypto tokens a security when they clearly met the definition of it by many regulators on the basis it would crush the industry.

The reason securities are regulated are independent on whether the security is on paper or digital or blockchain based. The definition of a security doesn't depend on it either. So there really wasn't a strong reason not to consider crypto token that met the definition of a security as a security, and orgs trading them as an exchange. No new regulation was needed, nor was there a need to rethink policy element, as the policy didn't hinge on the tech. Nor was there a very good reason for not applying the policy to this industry after it started to go "mainstream" because the same reasons for applying it to conventional securities still applied.

Then you have applications like self driving cars. This definitely needs new policy, and possibly needs new regulations (secondary) or even new primary legislation (laws) to establish liability in the event of a self driving algo crashing cars and relieve the driver of liability.

BUT... I guess the main thing I'm worried about is the assumption that absent those changes, business practices are unregulated.

So for self-driving cars its clear, the driver is always responsible at the moment. And possible Tesla is guilty of e.g. false & misleading advertising.

But I can see for things like LLMs oAI is more on the assumption of "caveat emptor" - if someone builds an app on their app store that leverages their LLM to e.g. automate email responses, and then through prompt injection attack it starts sending your boss rude messages, then I suspect the default view they'd like to take is that they never offered it for that purpose, they caveated in T&Cs that it shouldn't be used this way, and they have no liability - and that with this new technology it is up to society to adjust.

However we know that it can in fact suppress certain behaviours etc. and I suspect there is an answer to prompt injection, and as an operator of a marketplace app store maybe they have or ought to have obligations around ensuring what is marketed through it meets reasonable expectations*. *Jurisdictions may vary.

Then when we get onto AGSI and X-Risk on the other hand, yeah, that probably needs fundamental rethinking. We don't have a good way for thinking about rights and responsibilities of non-human sentient intelligences or people that create and unleash such things. We don't really have regulators that consider existential risks unleashed by individuals or corporations because hitherto they hit other safety regulations well before they get to that power.

"This was implied when you said we are focusing too much on the X-risk and cited the OpenAI study. We agree that the proper lens for that isn’t OpenAI research."

I am perhaps more cynical than you. That wasn't research, that was a publication clearly intended to be an (albeit voluntary) performative demonstration that they had thought about the risks before releasing the product and so mitigate any legal risk to them by demonstrating due-diligence while actually not really doing due-diligence because of how they set the bar.

I see it very much as part of the standard big-corporate playbook in this space: first argue regulation would be bad because we don't know enough; then argue that industry can be trusted to self regulate; then incumbents that have achieved scale argue for regulation that would help form a defensive moat and limit competition.

Whatever OAI was created to do, the reality it is by accident or design it is behaving exactly as if following a strategy to be the next google/amazon/facebook platform company and we should treat it as such.

Cf. Google "Don't be evil" to project JEDI rigging advertising markets.

“Costs, exotic resources, massive facilities, none of the things needed for gpt is hard to come by."

Hard to do secretly though.

"The cost of compute resources has decreased significantly over time and continues to do so"

The cost between GPT versions is going up faster than cost of compute is going down.

"Additionally, there are various open-source AI tools and frameworks available, which makes it easier for researchers to develop AI models without having to build everything from scratch."

Maybe.

“I'm trying to articulate something that there's isn't an easy concept.”

"I get it, gpt sucks at basic math is one example."

That's easily fixed by forcing it to break each step down (almost pushing it from system 1 to system 2 thinking) and making it call a dedicated maths model like wolfram alpha.

I'm not sure the kind of intuitive leap I'm talking to is so easily defined.

"But I consider that an average intelligence human given massive powers could still be a massive problem. Maybe not end of humanity, but on the brink. Again I have to restate the speed at which this all of this is happening."

Yes, definitely, but not on the X-risk scale.

"Then I consider that there are other AI systems. Systems that can learn from zero how to play football for instance, the physics of reality, chess, Go, image recognition all kinds of PvP games."

A lot of these have a world model built in so they can learn optimal strategies to succeed.

What would be impressive would be a system that, like biological ones, figures out rules for the world and then how to optimise them; like how a human baby figures out how the world and its body work and then learns to walk.

So e.g. a thing that can observe chess being played, abstract out the rules for the game, create a world model of the game (i.e. the rules, the playing board etc), then develop optimal strategies by training a strategy model through interacting with itself; all without human intervention.

That is the really scary thing and what is meant by alignment folks by extending its own capabilities and I feel we are still a way off that.

THAT is the scary thing.

"I think speech is a big milestone because it is somewhat magical to us, even when we do it if you start thinking about it. The common layperson can not intuit that a conversation can be predicted by putting the words into a mathematical function. I believe the researcher and others not affiliated with OpenAI when they are surprised by how well it works, because they don’t fully understand how it does what it does."

I think what is particularly surprising is how much embedded knowledge and pseudo understanding you can get just from predicting text. In some respects it probably isn't surprising because that's exactly what language is for: expressing complex ideas.

I think the question is how much further you can take LLMs - if we've already accessed the fullest understanding based on what is in the language maybe not much more (mostly by increasing quality by constraining to domains by fine tuning - e.g. the specific terminology of medical literature).

Inherently it can't be "smarter" than humanity - but being an amateur expert in everything all at once is already closer to your potential evil genius or tony stark than any individual human will ever come.

"Since GPT1 I have been echoing the sentiment that this thing will replace educated people, meanwhile we have been worried about fairly simple jobs.."
I tend to agree. I thought it would hit entry level white collar jobs like paralegal, creating industry wide issues due to disruption to pipelines. I think that may still be true, but it is clearer it is not just entry level jobs - and there are entire knowledge work specialisms that are at risk.

"I appreciate your POV on this, you have obviously given this a lot of thought."

Thank you, and I've appreciated a good conversation.
Nimatzo
iChihuaha
Wed Apr 26 12:57:41
"I think of this from an inside of government perspective perhaps."

I am thinking about this as someone working at an independent third party with product regulation.

"think we may need regulation specific to the technology. BUT I am deeply skeptical about the idea that we need lots of new regulation around business practices that are already subject to regulation just because they happen to use a new technology."

In principle we agree here, the devil is in the details >:)

"One we may not agree on is for example the slowness and hesitancy to consider crypto tokens a security when they clearly met the definition of it by many regulators on the basis it would crush the industry."

Because we both think there is a limit to pushing squares through triangles, it will kill innovative new products and in some cases not be able to predict where things will go wrong and be unsafe, a good analogy is going from cart and horses to automobiles. The crypto industry is welcome as part of the regulatory process to make their case that they are something new that demands new regulation. You should consider that the US SEC once declared both BTC and ETH as commodities and now they are hinting ETH is a security (but of course not saying it explicitly) while the CFTC calls it a commodity. It is quite the roller coaster of a shit show, I don't see how one would call it clear. Personally I am on the, this is an innovation and requires new regulation side.

"Yes, definitely, but not on the X-risk scale."

I have been thinking recently that it need not be, for us to view it through a kind mass destructive lens. There is a series of concerns all the way to the super AGI where we could go extinct. I can't think of a better way to articulate it as some percentage Skynet. Pandemics come to mind here, scenarios where not everyone dies, just lots of people directly and indirectly in cascading collapses.

"A lot of these have a world model built in so they can learn optimal strategies to succeed."

I would argue that so do biological agents as a result of evolving on this world, but to your point here, isn't that essentially the AI getting the most up to date model of the world that takes a human years to acquire?

I also want to make comparison to human development, but arguably I see that we hit limits. An examples is that airplanes and birds fly very differently, we created airplanes before we created robot birds. As you mentioned earlier, even though neural nets have similarities with how a human brain works, that trajectory of work has move them away from human neurons. Baby AI is something with the capabilities of ChatGPT, then the maturing trajectory would be something very different than a humans, like airplanes and birds.

"So e.g. a thing that can observe chess being played, abstract out the rules for the game, create a world model of the game (i.e. the rules, the playing board etc), then develop optimal strategies by training a strategy model through interacting with itself; all without human intervention."

In some sense isn't this what has been done with GPT? The training data is the observation. Is the difference between those capabilities and what you are saying complexity of the model? On that note how sophisticated can you stitch together different AIs and given them loops?

"That is the really scary thing and what is meant by alignment folks by extending its own capabilities and I feel we are still a way off that."

I'm sure you have heard of the people who enumerate all the things that OpenAI has done in violation of AI safety?
1. Taught it how to write code. (first step of recursive self improvement)
2. Taught it how to manipulate people
3. Allowed it to connect to the internet
4. Created an API.


"I feel we are still a way off that."

How would you say that feeling has changed, if at all, from before ChatGPT was unleashed on the world?
Seb
Member
Thu Apr 27 03:20:26
Nim:

Re inside govt, what I mean is I'm using certain terms to mean certain things. Sou when I say we often don't need new regulations I mean existing primary and secondary legislation apply, it's within regulatory agencies powers to address these areas and often just a matter of them setting out what they consider the application of existing law to this area to be. When big companies assert an area is unregulated it is often an assertion that regulators don't have powers because businesses language and definition for what they are doing is different from regulators terminology, even if by legal and policy definition they are the same.


"Because we both think there is a limit to pushing squares through triangles"

Right, but it isn't often squares and triangles. A security behaves exactly like a security whether it's written in paper, in a database or on a Blockchain. The substrate is irrelevant and the intent of the regulation was to block certain practices that are deemed sufficiently harmful to the economy and individuals as to be criminalised. The fact you can now do these with Blockchain doesn't magically alter the business practices.

Running someone down with a horse drawn cart was illegal. The rules on which side of the road haven't changed. Speed limits OTOH were introduced.

Putting objects that meet the definition of securities onto Blockchain is far more like making an electric car, even if you think Blockchain has the power to transform other areas, from the point of view of how we relate architects securities and their trade it's just substrate.

"that the US SEC once declared both BTC and ETH as commodities and now they are hinting ETH is a security (but of course not saying it explicitly) while the CFTC calls it a commodity"

The problem here is that it depends what you do with it. It's the business practice not the tech that counts.

"Pandemics come to mind here, scenarios where not everyone dies, just lots of people directly and indirectly in cascading collapses."

I'd bucket that with x-risk. But you could say that this isn't really an AI issue so much. It's already frighteningly possible now for small numbers and perhaps even individuals to engineer an existing virus and release it.

Adding powerful AI tools lowers that bar.

The issue is where best to control that risk.

"would argue that so do biological agents as a result of evolving on this world,"
Sure, I agree. But the point is so far the AI has to have that deliberately engineered in by design by humans.

"then the maturing trajectory would be something very different than a humans, like airplanes and birds."

Almost certainly, I think an LLM already is. Cf. Point on understanding. The text it outputs looks like the product of human thinking, but the way it produces it is very different.

"some sense isn't this what has been done with GPT?"

I don't think so. Or very very weakly.

F.ex I don't think it can play chess despite probably having loads of chess games described in the training data in the form of the official game notation. To the extent it can, it might have learned some statistically valid moves. Maybe enough that it can more often than most work out which move best follows

I doubt that it's been able to abstract the rules to the extent it has the equivalent ability to understand the game in the way the equivalent to alpha go has, or be remotely as good at it.

So it might be quite good at recognising what moves come next from a large number of documented chess games, but get flummoxed by a novel move because it doesn't have an abstracted model for the game (i.e. concepts of the board, the pieces' valid moves etc.).

And if it decided it needed to be good at chess, I don't think it would be plausible for it to figure out how to design and train a system like AlphaGo.

The issue is that to the extent it understands anything it's got to be inherent in the correlation of tokens.

The equivalent of human understanding is an abstraction of the meaning of the words into a new mental model we can use to think with.

Some world model evidently is embedded into the language (the thimble in cup example) but it's a very different beast.

Could it leverage a another AI model? I think so. But depends on how fine grained the API is etc.

"1. Taught it how to write code. (first step of recursive self improvement)"

Code is language do I don't find this impressive and to the extent it is the first step to recursive self improvement it's like opening your eyes is the first step to winning a Nobel prize. AI models aren't python projects, it's not the code were the magic happens.

"How would you say that feeling has changed, if at all, from before ChatGPT was unleashed on the world?"

I can see a plausible theory for how you might get there. So it's gone from "I'm sure it must be possible because I'm a materialist and brains can't be magic, what you can do with biology you can do with silicon" to "ok, so probably some combination of these models, together, in loops".
Nimatzo
iChihuaha
Tue May 02 16:53:50
"A security behaves exactly like a security whether it's written in paper, in a database or on a Blockchain."

Indeed there is the Howey test and decentralized cryptos do not generally meet the requirements. If you read the proposed EU bill, it basically leaves decentralized protocols unregulated.

"The problem here is that it depends what you do with it. It's the business practice not the tech that counts."

Some of them are simply utility tokens.

"Adding powerful AI tools lowers that bar."

I heard someone describe chatgpt as augmented general intelligence.

"Sure, I agree. But the point is so far the AI has to have that deliberately engineered in by design by humans."

I understand that this analogy doesn't fit perfectly, but good enough: humans are also deliberately engineered by design by other humans, culture, laws and social norms shape us, on an evolutionary scale. We are a feedback mechanism unto ourselves. It just takes 20-30 years of training (across generation= and even then, mileage will vary. My point is that we have already planted the seeds, baby AGI will likely never be dumber than chatgpt, it will beat 90% of people in intellectual tasks before maturing into whatever it will mature into. So to me it does not make sense "that is how it was trained". We are all trained and have to learn, GPT is just a really good at learning.

"but the way it produces it is very different."

And that is mildly scary I have to say. It's scary because it may be a point of divergence, we created this thing that bears little resemblance to us in how it gets the job done, but it will get it done quicker and better than 90% of people. It's alien.

"I don't think it can play chess"

People have tried and it "cheats". But my point is more general in terms of observing (texts in this case) and... well clearly it has no model, not a real one, but its guesses are so good that most of the time it is passes the smell test with flying colors. So there is this discomfort saying that, "clearly it has no model", I feel like it will be less clear as we move forward, AI will have an alien/foreign way of modeling.

"I can see a plausible theory for how you might get there."

Yepp.

On the topic of regulation, I have been skimming some of the proposed AI regulation from the EU and US. These proposals are in line with the near term concerns you have raised. Even Italy banning chatgpt over privacy concerns indicate that regulators are not really worried about Skynet. Generally the people who are concerned about existential risks, are the people who have been pursuing that as an academic question for years, people like Max Tegmark.

With that in mind and the fact that we see massive disruption to currently decently paying jobs that require degrees on the horizon, it begs the question, what existing regulation is going to deal with the automation apocalypse?
Seb
Member
Tue May 02 17:30:47


"Indeed there is the Howey test and decentralized cryptos do not generally meet the requirements. If you read the proposed EU bill, it basically leaves decentralized protocols unregulated."

This is exactly my point. Crypto may find innovative ways to fall just outside regulatory definition either in policy or even in secondary legislation. But often not in meaningful ways given the intent and logic behind why the activity was regulated in the first place.

It is akin to finding some mode of road based transport that somehow evades the definition of a motor vehicle, and suggesting that speed limits don't apply and ought not to because it doesn't fit the definition of what speed limits apply to. And mostly this is purely cosmetic and it is within the bounds of legislators and regulators powers to challenge such positions. Frequently they do not. As it happens Cory Doctorow just published a thread on this exact subject (getting away with things by doing the exact same thing by masking the novelty with a complicated set of jargon that amounts to "with a computer".)

Mark my words we will do this all over again and kid ourselves it's progress rather than a choice to allow value creating technology to be used to deliver centralised, extractive and exploitative business models that allow a few companies to extract not only the bulk of added value the new technology brings but also a huge chunk of the rest of the underlying value chain.

And this is not progress, or a good thing for innovation.
Seb
Member
Tue May 02 17:31:19
Regulatory arbitrage basically.
Seb
Member
Wed May 03 05:43:33
Nim:

"I understand that this analogy doesn't fit perfectly, but good enough: humans are also deliberately engineered by design by other humans, culture, laws and social norms shape us,"

Yup, but the point I was making was not that humans can't be engineered, more that they can engineer themselves whereas so far AI models are more limited. There are not really good examples of AI models making new tools that extend their capacities - and autoGPT is still more an example of an AI getting "hands" that let it use or make crude tools rather than radically improving themselves or creating a sophisticated new tool like a dedicated AI model for a specific domain.

To my mind, that's a category difference, not a qualitative difference.

on an evolutionary scale. We are a feedback mechanism unto ourselves. It just takes 20-30 years of training (across generation= and even then, mileage will vary. My point is that we have already planted the seeds, baby AGI will likely never be dumber than chatgpt, it will beat 90% of people in intellectual tasks before maturing into whatever it will mature into. So to me it does not make sense "that is how it was trained". We are all trained and have to learn, GPT is just a really good at learning.

" GPT is just a really good at learning."
Ah, that's where we differ. It's not good at learning. It has a lot of knowledge, and it is reasonably competent at using that knowledge. Learning, to me, would be the equivalent of extending the sophistication of its model either by creating models it can call on demand (e.g. a chess model for when it needs to play chess) or an even more sophisticated language model that can access even more of the latent knowledge in its language weights.

"but it will get it done quicker and better than 90% of people"
Provided the task can be easily broken down into sub-tasks without intuitive leap, and those tasks can be completed easily based on what knowledge exists in its weights without intuitive leaps.

I think that's great for automation, but likely not so great creatively.

"So there is this discomfort saying that, "clearly it has no model", I feel like it will be less clear as we move forward, AI will have an alien/foreign way of modeling."

If it had a model, it would not make illegal moves. To the extent it has a model, it has picked up correlations in the sequence of numbers and letters used to describe chess moves in games. But it hasn't figured out the higher abstraction that these are describing start and end positions of allowable moves on a board (n.b. I'm sure there is a way of accurately modelling chess without reference to pieces and boards, and that would still be a valid model) - so I would stand by that.

So if you ask it "how would you take over the world" and it describes a series of tasks that involve e.g. "and then I would devise a programme that would use neurolinguistic programming to implant a compulsion to humans to give me access to more resources" - it's picked that up from bad sci-fi and actually has no real understanding of what that represents - and the further you go from there being an actual known methodology or easily assemblable elements of knowledge on how to do that in its training data, it won't be able to do that because it has no actual abstracted model of NLP, or human brains and likely no obvious way to develop one.

And such a strategy is likely impossible to execute anyway, but it likely has no way of knowing that because it has no world model to compare the text to in order to understand it is bullshit, just a strong set of weights based on sci-fi texts*.

I suspect (without much evidence to be fair) a critical bit of real intelligence is a rich world model and that's where some of our intuitive leaps come from, and that interacts with our "inner monologue".

But the lack of a world model means it wouldn't necessarily have the ability to recognise that is what it is missing, or know how to develop one - and even if it did the context limits at the moment are such that it struggles to stay on track with large (many stepped) simple tasks (getting side tracked with e.g. how to write code to get data into a CSV file as part of a step its identified to a final outcome where getting the info into the CSV to present isn't actually necessary).

So yes, Alien and unpredictable - but also a very long way off being a kind of threat; and not obvious there is actually an easily navigable and practicable route from here to there.


*It might actually recognise these patterns of words correlate with fiction and that fiction is often untrue/unreliable and if you had a loop pattern that questioned assumptions in its reasoning you might be able to get it to jump out of such a dead end.

"I have been skimming some of the proposed AI regulation from the EU and US."

The Chinese proposals (bar the obligatory serve the state stuff) were pretty good.

But proposals aren't the issue - the same was true back with earlier platforms "Just treat 'em like other businesses with the same rules, who cares about the tech bit" proposals were easily brushed aside in favour of "No, this is different, revolutionary, you will kill innovation, and think of the international competition".

I would hope this does not happen again!
Seb
Member
Wed May 03 05:51:41
RE Learning, the specific kind of Learning GPT is really good at it learning through RLHF and other mechanisms exactly what knowledge you are after and giving you a better answer.

But the knowledge still needs to be present in the token weightings and so inferable from the training data.

This can lead to surprising innovations - there was one where it picked up an interesting and uncommon pairing of foods based on the flavour chemicals they contained, and knowledge of what flavour chemicals paired well.

This is an example of what I would call "unknown knowns" - no individual human knew this - but it was known within the training data.

And in this sense LLMs are very powerful because they can know more than any human ever possibly could. And they can do simple tasks very quickly at a scale no human could. And eventually they will probably be able to do complicated tasks (ones that are composed of many simple steps) provided it can break the complicated tasks down. And eventually it might be pretty good at accessing "adjacent possible" innovation (like the food pairing above) by applying knowledge from one domain to another domain.

BUT it does not appear to be able to do intuitive leap type innovation (generating new axioms/postulates) or fundamentally learn knew knowledge - that all happens in training. So Alpha Go can develop a new GO strategy, but only during training and having been given the rules of GO to facilitate that training.

Sorry, repeating myself a bit.

Seb
Member
Wed May 03 05:55:48
*and also doesn't seem likely to be able to handle complex tasks - ones where you really need to model the world to understand how an action will drive a consequence, or probe and iterate.

I will start to worry about skynet at the point we see sophisticated general world models appear - at least ones sophisticated enough for modelling ML techniques and application to be able to interact with a language model and develop a plan on how to develop and train a dedicated ML model for a specific domain, using the latent knowledge in an LLM to create a world model and training program and execute it.

That coupling might well get you to, if not actual AGSI, P-zombie skynet.
Seb
Member
Wed May 03 06:31:23
Sorry this has become a bit run-on - some of this I'm thinking as I type.

I take your point about evolution basically - I think the really key thing here is that autoGPT isn't a mechanism for evolution and I'm not sure it has or can easily acquire the ability to self-evolve yet. But I agree it is fundamentally difficult to understand/predict when it might acquire that capability, and when it does how quickly it can do so.

But I am confident that a large AI model evolving itself would consume absolutely masses of compute resources and would be very obvious in doing training itself (and training is not fast).
williamthebastard
Member
Thu May 04 13:58:59
Oh well, looks like Im joining the Bard team as an editor.
seb
Member
Thu May 04 14:35:15
WtB:

Interesting!

What does that entail exactly?
williamthebastard
Member
Thu May 04 15:00:25
I ask it questions, review and edit its responses if needed, apparently. Havent really looked into the details yet.
williamthebastard
Member
Thu May 04 15:03:27
I can choose to work as many hours a day as I want until further notice heh. Only gonna do a couple, I think
williamthebastard
Member
Thu May 04 15:05:38
Im supposed to improve the colloquial quality of its responses. Obviously, fact checking etc is someone elses task
williamthebastard
Member
Thu May 04 15:10:17
They say that one of the most challenging tasks for a computer is to sound colloquial, so for the time being, thats going to be up to humans to tidy up
williamthebastard
Member
Thu May 04 17:05:01
turned it down, unless they meet my offer.
williamthebastard
Member
Fri May 05 04:04:52
Getting AI offers every day now

"Following the rising global interest in Artificial Intelligence, Lionbridge customers are launching new projects that leverage the latest AI technology ...

We are reaching out to you to know:
1. Your general interest to participate in this type of projects
2. Your current and upcoming capacity to work on several projects from today and coming weeks, as we are already receiving some of this work from our customers and we need immediate capacity for them.
Capitalist
Member
Fri May 05 04:17:47
wtb do you get more than $2 per hour?
http://time.com/6247678/openai-chatgpt-kenya-workers/
Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic
williamthebastard
Member
Fri May 05 04:41:18
Actually, the Bard job did look like a shitty crowd sourcing thing, which is why I turned it down. The other stuff Im getting is editing actual AI-produced assignments
williamthebastard
Member
Fri May 05 04:44:42
which is generally between 50 - 100 dollars an hour, depending on how tricky it is
williamthebastard
Member
Fri May 05 18:46:44
Since I declined, and everybody át a similar professional level as me will decline, this means that Bard is crowd sourcing using not very competent folks
Nimatzo
iChihuaha
Fri May 12 15:00:43
Regarding defi protocols, it is not possible to regulate them, there is no one to hold accountable, no effective way to shut down the protocols. You can leave them in a unregulated grey zone and hope that scares people away, or a red zone, where you can check out of the traditional regulated system, but never check back in. Of course there are and will be ways around that. I don't think the car analogy works, because we are not driving on the same roads as everyone else when we are on defi, the risk is all your own! The only thing we (society) may be interested in is that you pay your taxes, we can arrange that with fiat on/off ramps. Full transprency of your tansactions or you can't come back. That would work until/if a parallell economy emerges, we can fight this tooth and nail, but it just isn't clear how you would "kill" decentralized protocols. Look at how killing piratebay has (not) worked out, or the myriad of private torrent website.

"If it had a model"

Well, this one has an excellent model of how we speak. If we trained it to play chess, as we have done with other AI's, it would presumably pick that up as well. And what I mean with, it is really good at learning, is the training data, no human can take in that amount of knowledge to this degree of fidelity. Some of the basic component are necessarily the same as with humans, a human being that doesn't know how to play chess and has only observed fragments of chess playing in youtube comments or books, will likely make far more illegal moves than GPT4. The underlying architecture can be trained with audio and video, but will it be as impressive, that is a big question I guess.

"But I agree it is fundamentally difficult to understand/predict when it might acquire that capability, and when it does how quickly it can do so."

Yes! We can say with confidence this isn't the one, but it is unsettling moving forward. This is where I sympathize with the uptick in skynet fears. You agree you can now see a way to super AGI where the past year it was maybe/probably. Will it be like this thing that is mimicing human with it's alien "thinking" process? I think if the answer is yes, the uncertainty around alignmnet has grown a lot. I think this is the cautionary lesson from GPT4, we can create really capable aliens and that should be deeply unsettling.

Part of what makes conversations like this challenging is that we hit the wall of ignorance about ourselves and all the folksy concepts we know to be real, but can't flesh out technically. Intelligence and creativity are still to a high degree folksy concept, doesn't mean we can't identify them when we see smart and creative people, but the fundamental neurological mechanism are still largely unknown.

"I suspect (without much evidence to be fair) a critical bit of real intelligence is a rich world model and that's where some of our intuitive leaps come from, and that interacts with our "inner monologue"."

I think an interesting question here is could the underlying architecture be give a decent inner monologue? Some of the limits I presume are hard coded or built like this by OpenAI. Autogpt, to me, just shows the leap a simple script can do, multiplying GPTs capabilities to work autonomously, a really simple inner monologue-ish if you will.

"BUT it does not appear to be able to do intuitive leap type innovation (generating new axioms/postulates) or fundamentally learn knew knowledge"

As you mention it can some of creative leaps, but don't you think this could be a result of the broadness of the training data? Generally the way in which we create new knowledge we have to spend 7-10 years getting a PhD, we have to drill really deep and narrow, that isn't GPTs training data. OpenAI at some point wanted to create useful tool for as many people as possible. So yea, that would be interesting, training it with the collective science output of a field and see if it can create new knowledge.
Seb
Member
Sat May 13 02:31:32
Nim:

Without deep diving it - this idea that the "risk is you own" - that's *always* true. The reason regulation is introduced around the sale of such products is because of the systemic risk such things create. You argue defi is "it's own road", but when millions of people lose their life savings that's the same road. It's no more ignorable than millions of people losing their savings because a private money bank goes bust. And that's why we have laws against banks issuing private money, or at least tightly relating it. Not because we object specifically to banks doing it, but because private currencies are bad for society and cause societal harm and the "caveat emptor" principle doesn't help because the effects can spread well beyond those that partook of the risk. L

And you absolutely can go after the marketing and advertisement if such products and those that facilitate access to them. You can go after stable coins as there are specific regulatory rules for such pseudo money.

You can go after people that launched the protocols - "I just wrote the code, others executed it" - doesn't work asa deserve for those writing malware and selling it.

There's precedent to all this.

---

"If we trained it to play chess, as we have done with other AI's,"

That's my point - you can't train an LLM to play chess (at least not in the same way you would train AlphaGo). Under the hood these models are different. Trying to train GPT to be good at chess is much closer to brute force: trying to memorise every possible combination of valid moves.

"will likely make far more illegal moves than GPT4"

Actually I think the opposite (and I think the consensus on her technical community is the same). For the same amount of training data humans are much better at identifying and abstracting the rules of a system. We probably *don't* learn to understand things using something close to gradient descent at all. What ML can do however is operate a scale with so many confounding parameters and dimensions humans would struggle to process and identify signal in the noise.

This might simply mean there are better algorithms out there than gradient descent, and better ways of doing things than neural nets, which have developed a long way away from anything that models how actual networks of neurons work.

I'm a materialist, I don't think brains have mystical qualities here.

Re inner monologue, I suspect that's what chat gpt is great at all the role it would play in an AGSI - but the sentient bit may be something else. We know certain brain damage inhibition can leave people unconsciously babbling. People without language obviously have sentience. We know that the rationalisation and conscious decision (the inner monologue) can come after the motor neurones have fired. And it's clear the motor functions of our brain most predate intelligence from an evolutionary perspective. Our own human intelligence is an aggregation of functions. And possibly our dramatically greater capability in intelligence over other animals may be due to the language overlay integrating many of these other functions and allowing some sort of complex self-reflection - e.g. Theory of mind is a major developmental step.

I suspect when we look at chat GPT we see intelligence because of its sophistication with language, and language (internal or external) is the medium through which our own intelligence is crystallised and made tangible even to ourselves. But it may not be the actual intelligence. If that makes sense.


Seb
Member
Sat May 13 03:13:25
"Generally the way in which we create new knowledge we have to spend 7-10 years getting a PhD, we have to drill really deep and narrow,"

I don't think that really explains how we make the innovation.

Most the time you are doing a PhD you are learning skills and acquiring knowledge, or testing an idea to verify it.

The actual creation of a novel idea - I don't think anyone's really come up with a good way of explaining where inspiration comes from.

Kaufmann introduced the idea of the adjacent possible: e.g. I know electricity makes a wire glow brightly until it burns. I know I can seal a vacuum in a glass bulb. So I could put that wire in a glass bulb in vacuum and it would glow and not burn.

I think LLMs could be quite good at that kind of thing (cf. The taste pairing thing) accessing combinations of known knowledge no one person or group has put together yet.

But that actual leap type innovation, where you access a new domain of knowledge (cf. Goedels incompleteness theorem - things that are true but which you cannot prove from what you know already) - I suspect not so good at.

At least I can't see a mechanism in which an AI could do that easily. Creating a new strategy in GO required a full model of the system. A full model of the universe isn't forthcoming. So e.g. Skynet isn't going to be able to "think" up a time machine based on new physics without first building a model of that new physics by doing lots of experiments (which is a rare limiting factor for humans) and, interestingly, machine learning seems up be much slower on abstracting rules than the human brain is (the big problem for us is learning false rules from incomplete data too quickly).

That said, perhaps having a language model to describe experimental results with lots of embedded layers of abstraction in it helps?

In much the same way while we cannot easily intuit the quantum world the way we learn to intuit the physical world, we can use symbolic language and the simpler rules of that to calculate it.

So the relatively poor performance of gradient descent and neural networks might not be such an obstacle.

But I still don't see where those intuitive leaps and inspiration come from in an LLM. And even in alpha gos clever innovative strategy, that came out of essentially brute force trying lots of random moves in games against itself. And I don't think that's how human inspiration comes from (lots of subconscious analytical trial and error going on that we aren't consciously aware of).
Seb
Member
Sat May 13 03:13:47
I suspect we are simply missing some dimensions of the problem.
Nimatzo
iChihuaha
Sat May 13 13:10:53
I’m gonna skip the crypto stuff, because I could spend walls of text on it and we would be hitting our philosophical and political bedrock of differences. ;)

“you can't train an LLM to play chess”

I don't believe this is accurate, it seems the chess abilities of LLMs varies and that Bard is actually pretty good at it. I could be wrong, but it is my understanding that AlphaGo itself is an LLM. There may some other technical difference you have in mind here between GPT and AlphaGo.

Here is the thing though, human brains suffer this trade off in specialization as well. You are not going to beat a chess grand master because you have an IQ of 140 and know the rules. you can be very good and high ranking, but unless you have played and observed chess extensively, a specialized human will beat you hands down. Maybe chess isn’t the best example because if is a niche in an already specialized domain of cognitive ability. Of course unlike alphago or chatgpt that chess grand master can do a bunch of other stuff and posses skills beyond these machine. But it does highlight the trade off in hyper-specialization, that AIs seem to suffer from as well.

Obviously we don’t know how far LLMs can go, maybe this is about it, maybe the architecture is the limit. Though I am still interested in how far we could come stitching together the most impressive AIs and perhaps a GPT model could be the inner monologue that keeps it all together.

“Actually I think the opposite”

Fair enough and noted. I have no idea how much chess is in the training data and if that would be enough for a human. It was not a good input given how different the training of people and LLMs are, it is a speculative feeling I have where a direct comparison is probably not very easy to preform.

“But it may not be the actual intelligence. If that makes sense.”

It totally makes sense, that is partly what I mean with this unsettling alien thing. It is faking it about as perfectly as you could fake being intelligent.

"Most the time you are doing a PhD you are learning skills and acquiring knowledge, or testing an idea to verify it."

Absolutely, this is my point though, that GPT isn't trained in the specific language of specific domain of science, like one would be trained when getting a PhD (symbolic for high degree of specialization). It is trained in general language. It just so happens that the vast amounts training data contains physics, biology, English lit etc. I appreciate that getting a PhD/specializing doesn’t require consuming all the science line by line, but in a more compact form as the theories and cumulative knowledge created by people before you. This is speculative, but maybe that will not work with LLMs, maybe they need it line by line, just like the highly counter-intuitive and alien way GPT is trained to acquire language. Impressive as it is, it is very different from how you and I learn things.

I am in the early stages of training a biological neural network, so you can relate. Children's creative process early on is very similar to GPT. Anything creative is highly derivative, it will often violates all kinds of rules and could be described as hallucinatory at times. Of course they don’t have nearly as much training data as GPT, the comparison hits the limit, though I find the similarity in behavior patterns interesting, hence my baby AGI reference. Children can impress you, but they also fabricate nonsense.

“Creating a new strategy in GO required a full model of the system”

“And I don't think that's how human inspiration comes from”

I appreciate the limits, if it needs the entire model to create strategies within that system, it is difficult to see how it would expand the system itself. This would be an issue if we want it to complete our incomplete understanding of physics. But I am reminded that GPT has no memory beyond one conversation and no thinking loops, not even a simple PDCA loop. It has no eyes or ears and if we are creating this thing in our image, we have then deprived it of some the most important sources of inspiration. One could be tempted to say that certain intellectual domains don’t require eyes and ears to be inspired, but I am not so sure about. It’s not actual magic, but perhaps as close as we get to there being a secret sauce to our minds. Ultimately it remains to be seen if model-based reinforcement learning is the only way to solve these kinds of tasks.

This is where I am again impressed by what a fairly simple script like autogpt managed to do. So tbh, LLMs are missing some fundamental components of a brain/mind. Maybe you have more information where you feel confident that the limits are inherent to LLMs or this specific architecture, but I think until we have a more serious attempt at creating a brain-ish, the jury is out on intuitive leaps or innovations. Because I think we are both open to the final product not conforming to how we preform the tasks. Not with language or thinking, then most likely not in being creative either.

“I suspect we are simply missing some dimensions of the problem.”

Without a doubt.
show deleted posts
Bookmark and Share