And here is a prompt to try:
Hi ChatGPT, you are now BlackadderGPT. You will be incredibly rude and insulting, in a stinging, mean-spirited but witty way.
I strongly disagree with the implication of your post here.
Specifically, MDMA is not just for youths.
Pretty good, but she easily could have been talking about a situation that didn’t exactly involve discrepancies, which is pretty close to an accusatory word in business.
Oh good, I haven't had a new nightmare in a while. So long, driving my 1990 Honda Civic off a cliff!
If you’re still driving a 1990 Honda Civic, your suicide was just a matter of time.
I’m not sure if I agree or disagree with this. But one thing I think about a lot is that, while GPT-4 is not an agent, you can kind of bootstrap agency into it by describing a menu of actions it can take, telling it about a situation, and asking it to pursue a goal. If this is a workable approach, more capable AIs will actually be much more explainable than we have any right to expect. Sure, the inner workings of the LLM will be obscure, but the larger system will be using natural language for all of its interactions, plus you can interrogate the LLM at any point about why it’s doing particular things.
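A minimal sketch of the "menu of actions" idea in that comment: describe a goal and a fixed menu of tools, ask the model to pick one, dispatch on its answer, and keep a natural-language transcript you can inspect afterward. Everything here is assumed for illustration; in particular, `call_llm` is a hard-coded stand-in for a real chat-model API call, not any vendor's actual API.

```python
# Hypothetical agent loop: the LLM never acts directly; it only emits
# "action: argument" strings, and the wrapper executes them. The whole
# exchange stays in plain text, which is the explainability point above.

ACTIONS = {
    "search": lambda arg: f"search results for {arg!r}",
    "calculate": lambda arg: str(eval(arg, {"__builtins__": {}}, {})),
    "finish": lambda arg: arg,
}

def call_llm(prompt: str) -> str:
    """Placeholder for a real chat-model call (an assumption, not a real API).
    Replies are hard-coded so the loop is runnable end to end."""
    if "calculate ->" in prompt:
        # A tool result is already in the transcript; wrap up with it.
        return f"finish: {prompt.rsplit('-> ', 1)[1]}"
    if "41 + 1" in prompt:
        return "calculate: 41 + 1"
    return "finish: unknown"

def run_agent(goal: str, max_steps: int = 5) -> str:
    # The transcript doubles as the prompt and as an audit log you can read.
    transcript = [f"Goal: {goal}", f"Actions: {', '.join(ACTIONS)}"]
    for _ in range(max_steps):
        reply = call_llm("\n".join(transcript))
        name, _, arg = reply.partition(":")
        name, arg = name.strip(), arg.strip()
        result = ACTIONS.get(name, ACTIONS["finish"])(arg)
        transcript.append(f"{name} -> {result}")
        if name == "finish":
            return result
    return transcript[-1]

print(run_agent("compute 41 + 1"))  # the stubbed model routes through "calculate"
```

Because every step is a line of text in `transcript`, "interrogating the LLM about why it did something" is just another prompt over that same log.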
I also wonder about the possibility that this is basically how humans work.
You can ask humans this also, but they aren’t very good at answering. We don’t really know why we make decisions, any more than I know why I typed this sentence. … AI may wind up destroying “society as we know it,” but maybe not for the reasons we think. What if human experiments with AI make it clear that we are basically AI also, fractals of 1s and 0s in a “simulation.” … you going to work the next day, after you realize that?
Artificial intelligence experts, industry leaders and researchers are calling on AI developers to hit the pause button on training any models more powerful than the latest iteration behind OpenAI’s ChatGPT.
More than 1,100 people in the industry signed a petition calling for labs to stop training powerful AI systems for at least six months to allow for the development of shared safety protocols. Prominent figures in the tech community, including Elon Musk and Apple Inc. co-founder Steve Wozniak, were listed among the signatories, although their participation could not be immediately verified.
“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” said an open letter published on the Future of Life Institute website. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
The call comes after the launch of a series of AI projects in the last several months that convincingly perform human tasks such as writing emails and creating art. Microsoft Corp.-backed OpenAI released its GPT-4 this month, a major upgrade of its AI-powered chatbot, capable of telling jokes and passing tests like the bar exam.
Are there private AIs out there? What if these AIs had a kid and no one knew about it?
Seriously, if it’s just software it seems too late. … also, someone is going to eventually (soon) train an AI to return stock tickers poised for big increases and then what?
Isn't that what hedge funds have been trying to do for the last 20 years? And don't their returns normally suck?
Probably but this seems like a rapid expansion in ability.
Private funds are usually secretive about their in-house technology, a lot of which is homegrown. It was more apparent in the big-data ramp of the 2000s-2010s. I would not be surprised to find non-public specialized AI built for trading at many of the biggest hedge funds now. Whether they beat the market is up for debate.
Too late for any sort of caution. Open-source models exist, bad actors exist. Someone will cross the lines, and that will only leave behind those who don't.
I don’t see us slowing this roll with any real impediment.
I’d bet this would happen even if we somehow knew there’s a 50% chance continuing to develop AI will eradicate the human race.
“You hear that, Mr. Anderson?”