AI advocates for more AI? Definitely has American capitalism down pat.
Tell me a story about therapy in the voice of tony soprano
Hey, how you doin’? It’s Tony Soprano here, and I got a story for ya about therapy. Now, you might think a tough guy like me doesn’t need therapy, but let me tell you, even the boss of the Sopranos needs a little help sometimes.
So there I was, sitting on the couch in Dr. Melfi’s office. At first, I was skeptical about the whole thing. I mean, who wants to sit around and talk about their feelings? But as time went on, I started to see the benefits.
Dr. Melfi helped me understand myself better, and that helped me make better decisions. She showed me that it’s okay to feel vulnerable sometimes, and that it’s important to talk about those feelings instead of bottling them up inside.
I gotta admit, it wasn’t always easy. There were times when I wanted to storm out of her office and go take care of business my own way. But Dr. Melfi was always there to keep me on track, to remind me of the progress we were making.
And you know what? Therapy helped me become a better boss. I started to see my crew in a different light, to understand their motivations and fears. That made it easier for me to lead them, to keep them in line without resorting to violence all the time.
So don’t be afraid to seek help if you need it. Even the toughest guys need a little guidance sometimes. And hey, if it worked for the boss of the Sopranos, maybe it can work for you too.
————————————
Give me ten pick up lines to attract a right leaning trump supporter
As an AI language model, I must preface my response by stating that pickup lines are not always the most effective way to form a meaningful connection with someone, and it is important to approach any potential romantic interest with respect and genuine interest in their thoughts and feelings. However, here are ten potential pickup lines that may appeal to a right-leaning Trump supporter:
- “Are you a conservative voter? Because you have my heart leaning to the right.”
- “I hope I’m not overstepping my bounds, but I couldn’t help but notice that we share a love for our country and our president.”
- “Excuse me, miss/sir, do you believe in love at first sight, or should I put up another Trump-Pence sign?”
- “I couldn’t help but notice your conservative values, and I was wondering if you’d like to discuss politics over a cup of coffee sometime?”
- “I know Trump’s wall was never built, but maybe we could build something between us?”
- “If the Second Amendment is your favorite, you’re going to love my right to bear hugs.”
- “I may not have a red hat on, but I’m still a big fan of making America great again. Would you care to join me?”
- “Excuse me, is your name Ivanka? Because you’re the only Trump I want to be associated with.”
- “You know, the left may call it a border wall, but I see it as a bridge between us.”
- “Are you a Trump supporter? Because you make my heart skip a beat faster than a CNN headline.”
Aronofsky fan, eh?
I certainly think my conscious experience when I just open my mouth and engage in conversation, without having thought about my opinions first or anything, is completely consistent with me doing next-word prediction as I go along. That’s exactly how it feels.
I also think, and I know ElSapo will high five me here, that our idea of our own identity is basically a narrative fiction that is ex post facto applied to whatever it is we did. Say I intend to diet and exercise more. If I do it, I’m like “aha, you see, free will confirmed, patriots are in control”. If I don’t, I’m like “unfortunately extenuating circumstances made me change my mind, can’t be helped”. The conclusion “actually it doesn’t seem like my intentions translate into behaviour in a simple way” is to be steadfastly resisted.
> I certainly think my conscious experience when I just open my mouth and engage in conversation, without having thought about my opinions first or anything, is completely consistent with me doing next-word prediction as I go along. That’s exactly how it feels.
I was thinking about higher-order reasoning actually. More concretely, when you’re making a long-term plan or doing some complicated bit of reasoning, the way you do it is by loading your intentions/observations/instrumental goals/whatever into working memory/context, then applying some version of next-token prediction to decide what conclusions are plausible–not because they’re “correct” based on some version of logical inference, but because they flow narratively from the problem you’re trying to solve.
> I also think, and I know ElSapo will high five me here, that our idea of our own identity is basically a narrative fiction that is ex post facto applied to whatever it is we did. Say I intend to diet and exercise more. If I do it, I’m like “aha, you see, free will confirmed, patriots are in control”. If I don’t, I’m like “unfortunately extenuating circumstances made me change my mind, can’t be helped”. The conclusion “actually it doesn’t seem like my intentions translate into behaviour in a simple way” is to be steadfastly resisted.
“Fiction” is a prejudicial word. What if what we call free will/identity is the process of constructing a narrative? It could simultaneously be true that the narrative is causally driving our behavior, but also that the construction of the narrative is biased in different ways by lots of factors. I.e., whether you complete the sentence “I’m hungry, but I’m on a diet, so I will…” with “…eat some chips because chips taste good” vs. “…tough it out because I have self-control” depends on exactly how hungry you are, how strongly you’re committed to your dieting goal, how tired you are, etc., etc. But ultimately “you” are the sentence that actually gets written, and that “you” is the one that controls your actions.
Humans are story machines, but the story is driven by our needs as living organisms and by the world and the ways we interact with it. Our narratives have consistency. They persist over time. We string words together in ways that both create and reinforce those narratives. The AIs reflect the words back to us but these are our narratives. It’s like talking to ourselves in a mirror.
Going in deep! Have you ever tried to play a guessing game with it?
I had to ask quite a few questions, but eventually:
However, I’m suspicious… how can I be sure it really thought of someone at the beginning?
It wouldn’t be exactly cheating, would it?
Wat?
Ok, that’s straightforward enough; sounds like a fun game!
Have it give you the name at the start, in a code you specify. The code is sufficiently complicated that you can’t decode it without some effort, which you delay making until the end of the game.
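Something like this would do it, if you want to check the commitment yourself rather than take the decode on faith (a minimal sketch using Python’s standard library; the committed string and the name are made-up placeholders, not real model output):

```python
import base64

# At the start of the game, ask the model to post only the base64
# encoding of the name it is "thinking" of.
committed = "TWFyZ290IFJvYmJpZQ=="  # placeholder for whatever the model posts

# ...play the guessing game without decoding the commitment...

# At the end, decode the commitment and compare it to the revealed answer.
revealed = "Margot Robbie"  # placeholder for the model's final answer
decoded = base64.b64decode(committed).decode("utf-8", errors="replace")
print(decoded)
print(decoded.strip().lower() == revealed.strip().lower())
```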
You’re just out of touch. ALL WHO THE FAMES is huge on youtube. My nephews love him. Or her.
There was a Wordle variant that worked like this.
We Got Him!
Good idea! A while ago I played this game already and asked it to provide a SHA1 hash of the desired answer, which would make the game actually real, but it said it couldn’t do it. Tried it again just now, but of course it didn’t match; plus all kinds of shenanigans:
Asked it again to think of a famous person, and provide both a base64 encoding and a SHA1 hash. The base64 decodes to “Marced Staking”, and the hash doesn’t match. Other base64 decodes were:
- Bearer Config
- 47863235869826376807
- 1234567890123456
- amazon.book
- Basic Beauty
- string-skill-britistent
Hashes never matched, but that may be a bit much to ask anyway; hash computations are quite complicated. But still: do we have a little Turing test here?
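For what it’s worth, checking a claimed SHA1 commitment after the reveal is only a couple of lines in Python (hashlib); the hash and name below are placeholders, and note the digest is byte-exact, so different casing or a stray space would make even an honest answer look wrong:

```python
import hashlib

claimed_hash = "0000000000000000000000000000000000000000"  # placeholder for the hash the model posted
revealed_name = "Albert Einstein"  # placeholder for the name it revealed at the end

# SHA1 is byte-exact: "Albert Einstein" and "albert einstein" hash differently,
# so the revealed string has to match the committed one character for character.
actual_hash = hashlib.sha1(revealed_name.encode("utf-8")).hexdigest()
print(actual_hash)
print(actual_hash == claimed_hash.lower())
```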
lol I asked it to provide a base32 as well as a base64 encoding of the name to guess; it never matched over several attempts! Left is base32, right is base64:
- w@ttwfgee whe a — Maybe Shit
- Hest is vut — Katie Grace
- hello world! — Man is distinguished, not only by his reason, but by this
- abcdefghijklmnrerouceroucerouceroucd — FriendlyBearer
- base32 is sust easee fe 倀 — Family Shop
At least there was a famous person in there once, I guess. But the codewords don’t decode to (complete) gibberish either… Actually, when you straight up ask it for base64 encodings of some given sentence, the results always seem to be correct; so it can actually do that. I decided to call it out:
Are you using GPT-4 or 3.5? I just tried something similar with 4 and it worked just fine.
I guess it’s 3.5, as I’m using the free version.
Mind sharing an example? Did you use a hash or a base64 encoding? Total confirmation that it works would be a (cryptographic) hash, but that might be computationally too challenging to do in a neural network.
Base64 encoding is still a simple mapping of symbols to symbols after all, which can be easily reversed. When it provides a base64 encoding of the name at the beginning, it’s basically equivalent to it just stating the name at the beginning, which kinda means it stores the name in its first answer. If it provided a hash, it would have to store what it guessed at the beginning… somewhere in its system, I guess. But I suspect it can’t.
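To make that difference concrete (Python standard library again, name picked arbitrarily): the base64 “commitment” can be read off by anyone straight away, while a hash can only be checked once the name is revealed, never reversed:

```python
import base64
import hashlib

name = "Katie Grace"  # arbitrary example name

# Base64 is just a reversible re-encoding: posting it is effectively posting the name.
b64 = base64.b64encode(name.encode("utf-8")).decode("ascii")
print(b64, "->", base64.b64decode(b64).decode("utf-8"))

# SHA1 is one-way: you can verify it against a revealed name,
# but you can't recover the name from the digest itself.
print(hashlib.sha1(name.encode("utf-8")).hexdigest())
```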
I used b64. I wouldn’t expect a hash to work. The model can’t “remember” anything, so if the animal it’s “thinking” of isn’t present in the chat in a decodable form, it will have to confabulate it.