Forum name: General Discussion
Topic subject: What journalists/outlets should I pay attention to about ChatGPT/AI?
Topic URL: http://board.okayplayer.com/okp.php?az=show_topic&forum=4&topic_id=13483533
13483533, What journalists/outlets should I pay attention to about ChatGPT/AI?
Posted by obsidianchrysalis, Tue Apr-25-23 03:36 PM
My sister asked me last night about ChatGPT and whether or not she should pay attention to it. She's a copy editor and wants to be up on the latest trends in the news.

I'm her resident techie, but to be honest, outside of reading a bit about its basic features and how wildly inaccurate it can be, I'm unsure about what to think about it.

So, I want to ask y'all where did you pick up your knowledge about ChatGPT and/or AI?
13483534, There's a new TED Talk about it from one of the creators:
Posted by shockvalue, Tue Apr-25-23 03:58 PM
https://www.youtube.com/watch?v=C_78DM8fG6E&pp=ygUQdGVkIHRhbGsgY2hhdGdwdA%3D%3D
13483537, A TED Talk is pretty much the worst way to learn about anything
Posted by Rjcc, Tue Apr-25-23 04:08 PM

www.engadgethd.com - the other stuff i'm looking at
13483536, I work with it as part of my job.
Posted by Triptych, Tue Apr-25-23 04:05 PM
Curious what your immediate questions are...
13483538, I'm biased, but my coworker
Posted by Rjcc, Tue Apr-25-23 04:13 PM
James Vincent, is pretty tuned in to AI stuff.

I would suggest reading everything he's written, going back a couple of years tbh. most of the things we're discussing now are implementations of stuff that everyone in the field knew was coming.

tl;dr I'd pick these

(from 2019) OpenAI’s new multitalented AI writes, translates, and slanders
https://www.theverge.com/2019/2/14/18224704/ai-machine-learning-language-models-read-write-openai-gpt2

(2020, about GPT-3 which is what blew up at the end of last year when they opened it up to everyone) OpenAI’s latest breakthrough is astonishingly powerful, but still fighting its flaws

https://www.theverge.com/21346343/gpt-3-explainer-openai-examples-errors-agi-potential

The future of AI is a conversation with a computer
https://www.theverge.com/22734662/ai-language-artificial-intelligence-future-models-gpt-3-limitations-bias

and from February, to help you understand what these things are and are not, despite the hype. also explains a lot about the weirdness of using them

Introducing the AI Mirror Test, which very smart people keep failing
https://www.theverge.com/23604075/ai-chatbots-bing-chatgpt-intelligent-sentient-mirror-test




and Satya Nadella talking about why Microsoft is pushing so hard
Microsoft thinks AI can beat Google at search — CEO Satya Nadella explains why
https://www.theverge.com/23589994/microsoft-ceo-satya-nadella-bing-chatgpt-google-search-ai





I'm off this week but I've suggested we write an AI explainer on a lot of the weirdness of the chatbots, but it isn't assigned yet.

We have a stream of stories that focuses pretty narrowly on the chatbot stuff from openAI, Microsoft, and Google (and a few others if it's notable enough) so it won't necessarily cover the stuff like midjourney, but it's enough to keep you informed, and you can subscribe specifically to those updates in RSS

If you're willing to take the time, I'd suggest starting at the oldest one
https://www.theverge.com/23610427/chatbots-chatgpt-new-bing-google-bard-conversational-ai/archives/7

and going forward reading the headlines / digging into the features and you should be pretty well caught up on what the fuck all this stuff means. if you have a q lemme know, I'm not an AI expert but I talk to James and a lot of other people all the time.


www.engadgethd.com - the other stuff i'm looking at
13483541, for your sister specifically
Posted by Rjcc, Tue Apr-25-23 04:27 PM

>She's a copy editor and wants to be up on the latest trends in the news.

I was just having this conversation with my sister, who was saying she used chatgpt to write something for work and we were discussing it.

in terms of writing and content generation generally, AI has been mostly misrepresented (for a lot of reasons and by a lot of people), and it just isn't good at it.

however, for writing: writing the content you want to make, and then using AI (for example, Grammarly) to go over it to check for errors, tone, or other stuff, is not a foolproof approach and it has a lot of its own issues, but it's probably the best way to enhance what you do with the technology that currently exists.

www.engadgethd.com - the other stuff i'm looking at
13483545, I agree with this
Posted by handle, Tue Apr-25-23 05:53 PM
If you prompt it with something like "Write a promotional email about a concert we are having on January 1st, 2024" it'll spit out generic bullshit.

But if YOU write something about the upcoming concert and then prompt with something like:
Please rewrite the following input as a:
promotional email
short blog entry
in a more conversational tone
as a product description
as a newsletter article
in 50 words or less
a help desk article
with a more professional tone


Then it'll output something based on your input that you might be able to incorporate into your original idea.

Basically, it acts as an editor, or as someone who is prompting YOU.

That's how I see it right now.
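
A minimal sketch of what that "rewrite my draft" workflow could look like as a script, assuming the openai python package (pre-1.0 interface) and an API key in the OPENAI_API_KEY environment variable. The draft text, model name, and helper function here are made up for illustration, not something anyone in this thread described:

import openai  # pip install "openai<1.0"

draft = "We're throwing a New Year's Day concert at the Fox Theatre..."  # YOUR writing, not the model's

def rewrite(text, style, model="gpt-3.5-turbo"):
    # ask the model to rework text you already wrote, per the approach above
    response = openai.ChatCompletion.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are a copy editor. Rework the user's draft as requested."},
            {"role": "user", "content": f"Please rewrite the following input as a {style}:\n\n{text}"},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(rewrite(draft, "promotional email"))
print(rewrite(draft, "short blog entry, in 50 words or less"))

Swap the second argument for any of the styles listed above; the point is that the model only reshapes what you already wrote.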



13483564, some links
Posted by fif, Wed Apr-26-23 02:27 AM
journalists are way behind. among journalists, kelsey piper is one of the best on ai: https://www.vox.com/authors/kelsey-piper

fairly random list here...some are people who primarily show practical uses...some theory, some top engineers making it happen..click away...

https://twitter.com/repligate
https://www.assemblyai.com/blog/how-chatgpt-actually-works/
https://axrp.net/episode/2023/04/11/episode-20-reform-ai-alignment-scott-aaronson.html
https://shakoist.substack.com/p/does-the-textual-corpus-for-large
https://chat.openai.com/
https://www.lesswrong.com/posts/bxt7uCiHam4QXrQAA/cyborgism#Appendix__Testimony_of_a_Cyborg
https://www.elidourado.com/p/heretical-thoughts-on-ai
https://thezvi.substack.com/p/ai-1-sydney-and-bing
https://dynomight.net/scaling/
https://en.m.wikipedia.org/wiki/Predictive_coding
https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities
https://twitter.com/itsPaulAi
https://twitter.com/perrymetzger
https://twitter.com/goodside?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor
https://twitter.com/mcxfrank/status/1640379247373197313
https://twitter.com/jonst0kes
https://twitter.com/_Borriss_
https://twitter.com/SullyOmarr/status/1645828811680800768
https://thezvi.wordpress.com/2023/04/18/the-overemployed-via-chatgpt/
https://thezvi.wordpress.com/2023/04/13/ai-7-free-agency/
https://twitter.com/Plinz?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor
https://twitter.com/ylecun?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor
https://twitter.com/karpathy?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor
https://twitter.com/gwern
https://every.to/napkin-math/6-new-theories-about-ai
https://twitter.com/karpathy/status/1644183721405464576
https://twitter.com/AndrewYNg
https://twitter.com/BrianRoemmele
https://twitter.com/atroyn
https://scottaaronson.blog/?p=7174
https://www.youtube.com/watch?v=Yf1o0TQzry8&t=188s
https://twitter.com/bengoertzel?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor
https://twitter.com/emollick?lang=en
https://astralcodexten.substack.com/p/####-simulators
https://generative.ink/posts/simulators/
https://gwern.net/scaling-hypothesis
https://www.youtube.com/watch?v=xoVJKj8lcNQ
https://lifearchitect.ai/ravens/
https://slatestarcodex.com/2019/02/19/gpt-2-as-step-toward-general-intelligence/
https://scottaaronson.blog/?p=7042
https://www.assemblyai.com/blog/how-chatgpt-actually-works/
https://twitter.com/samuelwoods_/status/1642889718336479233
https://twitter.com/samuelwoods_
https://marginalrevolution.com/wp-content/uploads/2023/03/GPT4scores-768x513.jpg
https://generative.ink/posts/simulators/
https://astralcodexten.substack.com/p/####-simulators

og paper
https://arxiv.org/abs/1706.03762


13483565, really recommend
Posted by fif, Wed Apr-26-23 02:33 AM
people use chatgpt. worth it to me to pay $20/month for gpt-4. current free version is 3.5 and is amazing enough. it is incredibly useful. i use it mostly for learning, synthesizing ideas...lots of more practical/job applications too. ask it questions in an area where you have domain expertise. prod it to really go in depth. see where it fails. also...ask it to cite sources. if things don't seem right, ask where it got the info, ask it "where were you wrong". ask it for counter-arguments etc. it is mindblowing tech to me. check it out for yourself. it's already a much more efficient way to interact with info and machines. future's gonna be something else...
13483769, there are several people on this list who are... uh extremely racist
Posted by Rjcc, Sat Apr-29-23 12:49 AM
fwiw

and ethan mollick is a fuckin idiot. useful for knowing what dumb people are saying, but absolutely worthless for analysis, he just passes on whatever is popular


www.engadgethd.com - the other stuff i'm looking at
13483569, created an account and started playing around
Posted by jimi, Wed Apr-26-23 08:03 AM
later on I saw a quick tutorial on it and learned some new stuff


it's like having a personal AI assistant, definitely a tool to help you be more productive.
13483574, it depends on what you're using it for
Posted by hardware, Wed Apr-26-23 09:10 AM
asking it for information on stuff is dangerous

but if you need it to like help tune up sentences, or something where you might just need a more active thesaurus, i think it's good

but it's far from doing human work for you
13483580, i use it at work
Posted by GNT1986, Wed Apr-26-23 10:28 AM
i asked it to describe the benefits of work from home and dropped its response into a survey at work.

executive leadership has us on 3 at home/2 in office days and shifting to 2wfh/3 in office effective july 1st. they plan on scaling it back unless there is backlash/feedback, so i'm giving them feedback.

also posited that people with options (like myself) will dip and they'll be left with an average ass work force.

doesn't sound like a recipe for a workforce that strives for excellence and the benefit of the public good to me, but, eh.
13483692, RE: What journalists/outlets should I pay attention to about ChatGPT/AI?
Posted by Original Juice, Thu Apr-27-23 01:37 PM
Lex Fridman is pretty knowledgeable as he supposedly has worked in the AI field.

In a fairly recent episode of his podcast, he interviewed Sam Altman, CEO of Open AI (GPT-4, Chat-GPT, etc.).

https://www.youtube.com/watch?v=L_Guz73e6fw
13483738, i'm not a fan of Lex but....
Posted by dapitts08, Fri Apr-28-23 10:15 AM
this episode and the one with Max Tegmark were must listens for me when it comes to ai.

#371 – Max Tegmark: The Case for Halting AI Development
https://lexfridman.com/max-tegmark-3/






13483768, I wouldn't advise listening to Lex Fridman on anything
Posted by Rjcc, Sat Apr-29-23 12:47 AM
and Sam Altman is worth hearing from since he runs openAI, with the stated caveat that he's one of the dumbest motherfuckers the world has ever known

www.engadgethd.com - the other stuff i'm looking at
13483695, I use it for webinar descriptions
Posted by legsdiamond, Thu Apr-27-23 01:46 PM
It gives me a nice first draft and then I tweak it to match what we are addressing in our webinar.
13483737, i've been using ai to assist me with writing copy for over a year
Posted by dapitts08, Fri Apr-28-23 10:05 AM
i started out using jarvis and copy ai.

once chatgpt launched i dropped those and started using it exclusively. i pay $20/month for access to chatgpt4 and it is 100% worth it.

i use it for much more than copy now. it's great for brainstorming and exploring ideas. outlining. summarizing. etc.

the key is to see it as an assistant and not a replacement. i never take what is returned at face value. i use my own reasoning and understanding to shape what is returned. critical thinking is a must. it's not a magic bullet for anything but it does often speed up my process for whatever task i'm using it for.

my suggestion for your sister is to start with understanding how prompting works and what is possible. then decide if adding chatgpt to her tool set is useful. my guess is that she will find the answer to be yes.

imo this is the best free resource to learn prompting right now:

https://learnprompting.org/docs/intro
13483752, Curious what kind of differences you've noticed between v3 and v4
Posted by Triptych, Fri Apr-28-23 01:10 PM
.
13483758, the biggest improvement for me was...
Posted by dapitts08, Fri Apr-28-23 03:05 PM
a noticeable drop in hallucinations and repetition.

redirection for responses that don't hit the mark seems improved as well.

also the increased amount of text that can be included in the prompt is super useful.

i'm also getting better at prompting, so that could be a factor at play here as well.

i've played around with the plugins a bit but so far no use case has stuck for me. i'm sure that will change as more are introduced.

i just got access to the alpha for browsing and code interpreter today. browsing uses 3.5 so it will be interesting to see if i notice any difference in the response now that i've been using 4 for awhile.
13483831, this is interesting. just started playing with it. asking it some stuff i know
Posted by poetx, Mon May-01-23 10:19 PM
to see what it knows. and stuff i don't know to see if it sounds plausible.

also toying with how i can use it in IT.

i've asked it some fairly lightweight questions about writing a function in python, and some stuff in excel, and the answers were concise and helpful.


peace & blessings,

x.

www.twitter.com/poetx

=========================================
I'm an advocate for working smarter, not harder. If you just
focus on working hard you end up making someone else rich and
not having much to show for it. (c) mad
13483833, you can use it to quickly create a set of bad answers to questions
Posted by Rjcc, Tue May-02-23 03:48 AM
They'll look good at first glance, but they won't be good.


www.engadgethd.com - the other stuff i'm looking at
13483900, Give an example of what you mean nm
Posted by fif, Wed May-03-23 11:21 AM
...
13483910, RE: Give an example of what you mean nm
Posted by Rjcc, Wed May-03-23 12:31 PM
https://twitter.com/emollick/status/1634594580694704128

first one about bsod -- those are answers, yes, but will they help you?

probably not, the compatibility checker literally never identifies shit, the odds that the problem is an externally connected device you can easily unplug is basically zero, and if you knew which apps were problematic you'd have uninstalled them

this is time wasting advice

idk shit about slime mold computers

you know where you can find the highest rated flatware sets on amazon?

on amazon

www.engadgethd.com - the other stuff i'm looking at
13483928, what would be a better answer to the bsod question?
Posted by fif, Wed May-03-23 03:43 PM
how would someone who fields questions like that for a living respond?

in your example, bing also provides links to various sites. are these more or less helpful than what would come up if he had entered the same search into bing or google (with no LLM)?

your answer of 'go to amazon' for everything. well, yea that can be a pretty good way to go but isn't it good to stay open to the possibility that good things can be bought at good prices online in places other than Amazon? example: i bought a repair manual for an old car recently. cheapest copy on amazon was $30, found one on another site for $6.

the 'using bing to comparison shop' use case is definitely early days. but i expect this will evolve quickly. instead of searching amazon and scanning a few top hits, trying to assess from reviews etc, i think this process will (or can if allowed) become much more streamlined--much more like walking into a shop selling flatware and asking someone who works there to help you find exactly what you need.

the thing with the llms is that you can push them right along with your specific question. you don't have to google and then open a bunch of tabs and scan through them to find what you're looking for. if you have a tech problem, you don't have to click through a bunch of old support forum posts (or whatever) hoping someone has addressed your problem in the past...you can (often) get the specific answer right at your doorstep.

i've been working with spreadsheets some and instead of reading long tutorials, i've been able to prompt it with natural language (as if asking a spreadsheet wizard) and the answers gpt4 has provided have saved me a ton of time.
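
for instance (a made-up example, just to illustrate the kind of prompt, not one i'm quoting from my actual chats): ask it "i have order amounts in column B and region names in column A; give me a formula that totals column B only where column A says West" and a correct answer would be =SUMIF(A:A, "West", B:B). then you can follow up in plain language ("now ignore rows where column C is blank") instead of digging through a tutorial to find the right function.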


13483931, here is an example
Posted by fif, Wed May-03-23 04:04 PM
of how llms can save you a ton of time learning things. (bear in mind, there hasn't been a ton written about de la soul and tommy boy and so on...so on topics with tons written...they can be that much more useful--ie psychology/therapy; i just cut my knee--what do i do?; buying a home and understanding mortgage rates etc; car repair; scientific subjects)

i think it does pretty well below. someone who knows nothing about de la...in minutes walks away knowing more
https://sharegpt.com/c/AUS1huI
13484083, there's nothing about this sequence
Posted by Rjcc, Sun May-07-23 02:31 AM
that reflects what a person who doesn't know anything about de la soul would be asking.

the fastest way to find out what happened to de la soul would be clicking on an interview with them in the new york times or rolling stone.

not asking questions that a person who only just learned who de la soul is wouldn't actually be asking, just so you can get a rewritten summary one step at a time.

it's a terrible method that wastes your time, even if it had access to current data.





www.engadgethd.com - the other stuff i'm looking at
13484082, RE: what would be a better answer to the bsod question?
Posted by Rjcc, Sun May-07-23 02:27 AM
>how would someone who fields questions like that for a living respond?

what's the error code on the blue screen?
Search google to see if there's an easy answer.

if there isn't, take it to someone who you can pay to check it out or try restoring your system. pretty much anything else further than that is going to be a waste of your time.

this is all in the chatgpt answer, but it's buried among a ton of other shit because chatgpt doesn't know what wasting your time means, it's just saying stuff.



>your answer of 'go to amazon' for everything. well, yea that can be a pretty good way to go but isn't it good to stay open to the possibility that good things can be bought at good prices online in places other than Amazon? example: i bought a repair manual for an old car recently. cheapest copy on amazon was $30, found one on another site for $6.


his prompt was specifically about what was the highest rated set on amazon, I wasn't limiting it that way.

you'd still be better off just googling

>the 'using bing to comparison shop' use case is definitely early days. but i expect this will evolve quickly. instead of searching amazon and scanning a few top hits, trying to assess from reviews etc, i think this process will (or can if allowed) become much more streamlined--much more like walking into a shop selling flatware and asking someone who works there to help you find exactly what you need.


no, it isn't. because what you'll get is a sponsored answer (which is also what a salesman gives you). if you were planning on scanning reviews, why would you take chatgpt's word for what the reviews say? the first time you buy something and it turns out its advice was bad (even if its advice was perfect, things go wrong) why would you ever take that advice, which is slower and less precise, ever again? you wouldn't.

but what people want is to know what other real people like.

which is not a business model.


>the thing with the llms is that you can push them right along with your specific question. you don't have to google and then open a bunch of tabs and scan through them to find what you're looking for. if you have a tech problem, you don't have to click through a bunch of old support forum posts (or whatever) hoping someone has addressed your problem in the past...you can (often) get the specific answer right at your doorstep.


the thing is the answer you get is in no way reflective of facts. it's just what words the thing thinks sound good if you place them next to each other.


>i've been working with spreadsheets some and instead of reading long tutorials, i've been able to prompt it with natural language (as if asking a spreadsheet wizard) and the answers gpt4 has provided have saved me a ton of time.

I bet $20 that one hour-long training session with a human would save you more time than asking chatgpt ten times


www.engadgethd.com - the other stuff i'm looking at
Put your bifocals on
Posted by fif, Thu May-11-23 02:49 AM


>
>his prompt was specifically about what was the highest rated
>set on amazon, I wasn't limiting it that way.
>

These LLMs already do more than u think. One of the most important inventions ever just got going imo. /Shrug. You can cherrypick examples but a key is the user has to know how to make them spill
13484267, huh?
Posted by Rjcc, Thu May-11-23 07:27 AM
>These LLMs already do more than u think.

they do significantly less, which is very impressive in its own way.


www.engadgethd.com - the other stuff i'm looking at
13484353, Huh?
Posted by fif, Fri May-12-23 02:18 AM
They do significantly less than you think?

Re:your "huh"
Read the screenshots you posted again. He did not mention Amazon, you're tilting at windmills, making shit up. Are you getting enough sleep?
13484358, you're right, he only mentioned star reviews
Posted by Rjcc, Fri May-12-23 07:15 AM
this does not in any way affect what I said, or how bullshit the question and the results he received are.

it does seem to affect you wanting to pretend this was a reason to use AI

www.engadgethd.com - the other stuff i'm looking at
13484354, Okp needs a remindme function
Posted by fif, Fri May-12-23 02:22 AM
Can you archive this?

One of us gonna look real dumb in a little while
13484359, anyone advocating the use of LLMs for information lookups
Posted by Rjcc, Fri May-12-23 07:17 AM
is only going to look intelligent when they ask an LLM to tell them how smart they are for doing it.

because that's what an LLM does. it tells you what it thinks you want to hear.

www.engadgethd.com - the other stuff i'm looking at
13484491, Your combo of arrogance and stupidity
Posted by fif, Sat May-13-23 05:26 AM
is always fascinating. Thx, have a good weekend
13484511, did chatgpt write that for you?
Posted by Rjcc, Sun May-14-23 11:04 AM

www.engadgethd.com - the other stuff i'm looking at
13483834, Godfather of AI warns of Dangers Ahead - NYT
Posted by Hitokiri, Tue May-02-23 08:52 AM
https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html


aka the exact fucking premise of countless movies.







13484363, the thing about him is
Posted by Rjcc, Fri May-12-23 07:30 AM
dude is quite legitimately a genius and pioneer in the space


he's also repeatedly made predictions about AI and its use that are the dumbest shit anyone's ever said with their human mouth in the entirety of recorded history.

he's good at programming but he doesn't know shit about shit, except for one thing -- he's absolutely sure that any woman who previously said AI is dangerous is wrong and hysterical

www.engadgethd.com - the other stuff i'm looking at
13483935, GPT-4 breaks down Deck's verse on Triumph
Posted by fif, Wed May-03-23 05:51 PM
https://sharegpt.com/c/jQxhyON

this is a good example of how you can guide the LLM to better, more complete answers. compare the first analysis it gives...then a little needling on my part and asking for a more thorough longer analysis...and bam.

hard not to be impressed that a robot can do this
13484360, huh
Posted by Rjcc, Fri May-12-23 07:22 AM
>
"Flee with the lottery, possibly they spotted me
This line portrays Inspectah Deck as a successful yet elusive figure who has acquired wealth and fame while remaining under the radar. The mention of "the lottery" suggests that he has reaped the rewards of his talent, while "possibly they spotted me" alludes to his desire to maintain a low profile in spite of his success."

there's essentially nothing about this response (I picked a line completely at random) that is an accurate interpretation

however, the response reads like real english.

but it's just so much bullshit, there's no understanding there.

if you took this and said it to another person they'd think you were the dumbest motherfucker who has ever lived, and they'd be correct.

www.engadgethd.com - the other stuff i'm looking at
13484514, anyone thinking about using chatgpt should read what fif posted here
Posted by Rjcc, Sun May-14-23 11:06 AM
this is an example of what a "better" answer is with this thing.

it will get you fucking fired

www.engadgethd.com - the other stuff i'm looking at
13484528, Your brain is lost
Posted by fif, Mon May-15-23 05:02 AM
Trumpeting this as some great...triumph of yours. like youve made this stonecold killer point lol. It's a verse on a rap song. It slightly misinterpreted a couple lines of poetry.

You jabbed elsewhere that I use chatgpt to write. I dont, I use it to learn. You could too but first go find your brain

13484535, no, the point is that you didn't catch it.
Posted by Rjcc, Mon May-15-23 08:07 AM
and when it was caught, you got so fuckin mad you started throwing insults because you were embarrassed that your robot friend doesn't actually understand rap lyrics.


www.engadgethd.com - the other stuff i'm looking at
13484585, RE: no, the point is that you didn't catch it
Posted by fif, Mon May-15-23 02:30 PM
I read your buddy's noahopinion tweet dude. Guess what I'm not in a class writing a paper about this verse. Did I say that this was a flawless interpretation? no. It is still mindboggling that machines can do this. Lol at "line by random". Lol at your mind in general. You really ought to interact with gpt 4 more. Have you? I think I know your steez. You are a 2nd hand opinion peddler. Think for yourself.

When I have a minute I can show you more practical knowledge-seeking uses. It is far more efficient than Google/wiki etc for guiding you to the right facts. On everything? Is it definitive? No but it is revolutionary and you better get up to speed. What a weird corner of the "tech news" world u inhabit
13484614, you think autocomplete is "mindboggling"
Posted by Rjcc, Tue May-16-23 07:42 AM

www.engadgethd.com - the other stuff i'm looking at
13484529, lines 1-3
Posted by fif, Mon May-15-23 05:21 AM
this is amazing. a machine is doing this. the world is not the same. rjcc will always be the same. that ain't news.

---

"I bomb atomically, Socrates' philosophies"

Inspectah Deck compares his lyrical prowess to an atomic bomb, suggesting that his rhymes are explosive and powerful. He also references the ancient Greek philosopher Socrates, known for his deep thoughts and questioning nature. By connecting his rapping to Socrates, Deck implies that his lyrics contain profound wisdom and intellectual depth.

"And hypotheses can't define how I be droppin' these"

Deck emphasizes the uniqueness and complexity of his lyrical style, stating that no theory or speculation can accurately capture or define his approach to rapping. This line serves to elevate his status in the rap game, asserting that his talent is unparalleled and cannot be easily categorized or understood.

"Mockeries, lyrically perform armed robbery"

Here, Deck likens his rap skills to a heist, suggesting that he can outshine and steal the spotlight from other rappers with his powerful and impactful lyrics. The use of the word "mockeries" implies that he views other rappers as inferior and that he dominates the competition with his superior lyrical abilities.
13484534, you gotta be a fuckin dumbass
Posted by Rjcc, Mon May-15-23 08:06 AM
if you find that shit impressive.

www.engadgethd.com - the other stuff i'm looking at
13484584, You are playing yourself
Posted by fif, Mon May-15-23 02:25 PM
Keep going.

If you don't understand why that is impressive...well...try to think about it

13484617, I can't tell if you're completely unimpressed with the technology...
Posted by PimpTrickGangstaClik, Tue May-16-23 08:10 AM
Or if you just think it's overblown.

For the inspectah deck thing: sure it's not the most amazing interpretation of the lyrics. But the fact that you're able to tell it to interpret some lyrics and in 5 seconds it gives you an essay that would get a B+ in a high school English class is super impressive to me.

Don't even get me started on its ability to generate code in pretty much any programming language. In ten seconds it will generate something that would have taken me a few hours to figure out.



13484618, it's not about being impressed
Posted by Rjcc, Tue May-16-23 09:10 AM
it's about understanding what you're looking at.

it didn't interpret shit, because it can't.

it doesn't know what lyrics are, or what words are, or who inspectah deck is.

it can string words together to simulate what it thinks you'll interpret as a readable and enjoyable sentence.

if you acknowledge that, you can use it effectively.

if you go OH MY GOD THIS IS MY NEW RESEARCH ASSISTANT

you're going to do some dumb shit.

www.engadgethd.com - the other stuff i'm looking at
13484637, I think this right here is the main problem
Posted by Rjcc, Tue May-16-23 01:12 PM
> in 5 seconds it gives you an essay that would get a B+ in a high school English class is super impressive to me.


it didn't do that.

it copied. it's read every post on the internet, and based on what it read, it copied what someone else already said.

if you copy someone else's paper, that's not a B+. That's a zero and if you do it too many times you get expelled.

there's no intelligence there, it's just language.

if your essay is copied from someone else but you didn't do the work, as soon as the teacher asks you what it all means, you won't have an answer, just like this thing doesn't have an answer and it can't give you one.

and on your point about programming, the lesson is that programming isn't difficult. but we already knew that, since for decades there have been programming bootcamps that teach people to code in a matter of days or weeks.


www.engadgethd.com - the other stuff i'm looking at
13484671, here you finally said enough
Posted by fif, Tue May-16-23 04:22 PM
seems you are trying to repeat Vincent's arguments but he is also confused.


>it copied. it's read every post on the internet, and based on
>what it read, it copied what someone else already said.
>

wrong. it is able to analyze and understand completely novel text, things that were just written. the point of me asking it to go line by line through Deck's verse was to show that it can analyze damn well from the text ITSELF. it is "reading", it is not parroting someone else's analysis of Deck's words.

https://sharegpt.com/c/wPGg6KA

>there's no intelligence there, it's just language.
>

confusion


13484676, If you don't know how it works
Posted by Rjcc, Tue May-16-23 04:59 PM
then you shouldn't be trying to get into conversations about it.

It doesn't "understand" anything.

It learns from the patterns it's seen to break down text, and put together text in a way that it thinks resembles the text it was shown.

that's why it failed to analyze that line and still provided a gibberish response. because it doesn't know what any of that means.

There is no AI expert who would disagree with me, but you don't know that because you don't know shit about AI.


www.engadgethd.com - the other stuff i'm looking at
13484684, you have been anthropomorphizing it as well
Posted by fif, Tue May-16-23 05:39 PM
you say, it doesn't "understand" but it "thinks". strictly speaking, these terms don't hold, just as my saying it "reads" doesn't hold. you want to get rigorous all of a sudden trying to save face by being a pedant.

the distinction between language use and knowledge is a thorny matter. the chinese room thought experiment?

---

the use case of interpreting poetry is not a great one. it is undeniably a more efficient way to do certain kinds of directed learning already. i was in conversation with a group of people with some very specific questions about smallpox...its history, the course of symptoms, when it becomes infectious etc. people used a variety of methods for getting to the answers. gpt-4 was by far the fastest...with the right prompting...it got the user to a) the answers but also b) the title of the definitive reputable text on these smallpox questions. someone else googled, determined the most authoritative book, downloaded it, scanned the table of contents, ctrl+f'd around for answers. the text had them, but the process was more cumbersome. if we were writing a serious paper on the topic, of course, you want to use human-written texts. gpt-4 could quote from the text and bring in other insight not in one text. can it be trusted 100%? no. does it allow far greater speeds of self-directed learning, bring one to the best sources faster? yes. this is my current main use of gpt-4.

so i brought up the deck example to show it can go essentially from raw text to correct interpretation (with a low error rate) but this is only one thing it can do. try having it summarize emails you receive. it is very good at it.
13484687, use gpt-4 for answering animal questions
Posted by fif, Tue May-16-23 05:48 PM
a kid would ask. tell me it is useless. you want to know if sea turtles migrate together or alone? how much they weigh? how thick their shells are?

how many species of dolphin are there? how deep do they dive? why do salamanders' tails fall off? how quickly do they regrow? what colors can dogs see? why are cats less friendly than dogs?

or

to understand basic plumbing questions. or to get an overview of how your car works.

it allows you to wormhole down curiosities. and to keep following up. it does this very well. and then if you want to know more, more definitively, you check it. who are the top researchers in this area? what is the consensus? what is the history of thinking on this? what did people in 1800 think infection was caused by. and so on
13484689, Bing runs on GPT-4, let's ask Bing a simple question.
Posted by Rjcc, Tue May-16-23 06:10 PM
https://twitter.com/tomwarren/status/1658577433954533376


www.engadgethd.com - the other stuff i'm looking at
13484691, cherrypicking flaws is all well and good
Posted by fif, Tue May-16-23 06:15 PM
but to act like that tells the whole story or makes them useless as a whole is willful blindness at this point:

here are those animal questions + some...speedily answered. fact check them.
https://sharegpt.com/c/HDrbkn1
13484692, how is that cherrypicking
Posted by Rjcc, Tue May-16-23 06:20 PM
meanwhile, you ask it questions that can easily be answered from its training data

but that's not cherry-picking.

www.engadgethd.com - the other stuff i'm looking at
13484694, if you don't get it
Posted by fif, Tue May-16-23 06:34 PM
you don't get it.

let's check back on this thread in 5 years...and see how you did.

it is a very efficient way of pulling together information. it is imperfect but far superior to previous modes in many ways. it is already saving many people i know a ton of time at work and in other pursuits with no dropoff in performance noted. do you expect it to be a flash in the pan?

using it to "write" for you is not something i'm comfortable with. it a) does make errors and b) could de-skill people. it is important, as with any technology, to use it as an adjunct to our lives & minds, not as a replacement.
13484701, I asked how is that cherrypicking
Posted by Rjcc, Tue May-16-23 08:14 PM

www.engadgethd.com - the other stuff i'm looking at
13484680, hmm, how do large language models work?
Posted by Rjcc, Tue May-16-23 05:05 PM
"In fact, their objective function is a probability distribution over word sequences (or token sequences) that allows them to predict what the next word is in a sequence (more details on this below)."


that's from one of your links above, you should go read it and not be dumb.

https://www.assemblyai.com/blog/how-chatgpt-actually-works/#:~:text=In%20fact%2C%20their%20objective%20function%20is%20a%20probability%20distribution%20over%20word%20sequences%20(or%20token%20sequences)%20that%20allows%20them%20to%20predict%20what%20the%20next%20word%20is%20in%20a%20sequence%20(more%20details%20on%20this%20below).
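
if you want to see what that sentence means in practice, here's a toy sketch (made-up five-word vocabulary and made-up scores, nothing from the article): the model assigns a score to every token in its vocabulary, a softmax turns those scores into a probability distribution, and the next token gets sampled from it. that's the entire trick being described.

import math, random

# toy illustration of "a probability distribution over word sequences":
# scores (logits) -> softmax -> probabilities -> sample the next token
vocab = ["the", "cat", "sat", "on", "mat"]
logits = [2.0, 0.5, 1.0, 0.1, 1.5]  # made-up scores for "what comes next" given some context

exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]  # softmax: non-negative, sums to 1

next_token = random.choices(vocab, weights=probs, k=1)[0]
print(dict(zip(vocab, [round(p, 3) for p in probs])), "->", next_token)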

www.engadgethd.com - the other stuff i'm looking at
13484075, It's great for creative questions or things that you can validate.
Posted by Nopayne, Sat May-06-23 12:10 PM
If you're using it to research a topic that you know nothing about then you're going to have a rough time.
13484264, Wolfram on lex fridman
Posted by fif, Thu May-11-23 02:36 AM
https://youtu.be/PdE-waSx-d8

13484268, if you use time in your real actual life listening to lex fridman
Posted by Rjcc, Thu May-11-23 07:27 AM
you'd be better off just asking chatgpt

www.engadgethd.com - the other stuff i'm looking at
13484352, Lol
Posted by fif, Fri May-12-23 02:07 AM
You are a strange bird. I am not a Lex Fridman stan but do you know who Stephen Wolfram is? Is he up there with Sam Altman on your dumbest motherfuckers ever list?

This was Wolfram's 4th time on Fridman's show. He must be even dumber than Altman for continuing to go back on!

Do you realize how dumb YOU sound? What kind of bubble are you living in?

13484357, Yes I know who Stephen Wolfram is.
Posted by Rjcc, Fri May-12-23 07:14 AM
I don't know why you think I should give a shit who white nerds enjoy pandering to them.

>What kind of bubble are you living in?

how am I living in a bubble? I've spoken to Altman and Fridman.

That's why I know they're dumb as shit and anyone who listens to them is a moron


www.engadgethd.com - the other stuff i'm looking at
13484492, Name a smart person
Posted by fif, Sat May-13-23 05:53 AM
Your colleague James Vincent has greater insights about AI than Sam Altman? You are living in an alternate universe.

>I don't know why you think I should give a shit who white
>nerds enjoy pandering to them.
>

Lol the OP asked for sources for getting up to speed on llms.
Hmmm..wolfram or your mid ass coworker?

>>What kind of bubble are you living in?
>
>how am I living in a bubble? I've spoken to Altman and
>Fridman.
>

Oh yea? About what?
You are bubbled up in some weird closed minded condemnation mode all the time. You are a hater who says nothing of value yet throws stones all day every day. What is your critique of Altman and fridman. In a couple sentences explain why they are such dumb dumbs. Prove how sharp you are in comparison! You're a latent genius, rjcc! First step to helping the world with that brain of yours is achieving something approaching a coherent thought. U got this


>That's why I know they're dumb as shit and anyone who listens
>to them is a moron
>
>
>www.engadgethd.com - the other stuff i'm looking at
13484515, RE: Name a smart person
Posted by Rjcc, Sun May-14-23 11:10 AM
>Your colleague James Vincent has greater insights about AI
>than Sam Altman? You are living in an alternate universe.



my dog has better insights about literally anything than Sam Altman and it's been dead for ten years. dude's a dumbass. I don't present my colleague as more of an expert than him, I think literally anyone with a semifunctional brain is better.


>
>>I don't know why you think I should give a shit who white
>>nerds enjoy pandering to them.
>>
>
>Lol the OP asked for sources for getting up to speed on llms.
>
>Hmmm..wolfram or your mid ass coworker?

probably not the person who didn't build the llms we're discussing


>
>>>What kind of bubble are you living in?
>>
>>how am I living in a bubble? I've spoken to Altman and
>>Fridman.
>>
>
>Oh yea? About what?
>You are bubbled up in some weird closed minded condemnation
>mode all the time. You are a hater who says nothing of value
>yet throws stones all day every day. What is your critique of
>Altman and fridman. In a couple sentences explain why they are
>such dumb dumbs. Prove how sharp you are in comparison!
>You're a latent genius, rjcc! First step to helping the world
>with that brain of yours is achieving something approaching a
>coherent thought. U got this

who said I'm smart in comparison? anyone is smart in comparison to sam altman. there's nothing to justify the idea that sam altman is smart other than he's very rich.

I said anyone who listens to fridman is dumb. There's a certain minimum level of intelligence required to pander to dummies effectively, and I think fridman has it.

>
>
>>That's why I know they're dumb as shit and anyone who
>listens
>>to them is a moron
>>
>>
>>www.engadgethd.com - the other stuff i'm looking at
>


www.engadgethd.com - the other stuff i'm looking at
13484516, thank you so much for skipping over your VERY IMPRESSIVE inspectah deck
Posted by Rjcc, Sun May-14-23 11:13 AM
breakdown

it's hard for me to make it obvious to anyone that what you're saying is puffed up bullshit and overconfidence because you don't know anything about anything and you've listened to a bunch of podcasts telling you that you're a Very Good AI Prompter.

but you putting up an example of the thing spitting out absolute gibberish, and failing to actually read through it to catch that before it's pointed out

is precisely what I'm talking about

and very valuable for this discussion

www.engadgethd.com - the other stuff i'm looking at
13484598, you are projecting
Posted by fif, Mon May-15-23 04:32 PM
your insecurities on me. you are very bad at explaining your reasons for believing things. yet you constantly think you are shooting truth rays here on okp. you have problems.

if we took a panel of 100 professors from various fields at well-regarded schools (or whatever agreed upon standard of 'highly intelligent people good at evaluating arguments') and had them analyze 10 different rjcc reply guy okp threads...do you think they would come down on your side? it is bizarre: 1) how confident you are in your beliefs 2) how bad you are at explaining them.

it seems that you rank people based on some strange mood/political affiliation. you have a very hard time engaging with any of the relevant aspects of an argument. poor argumentative tactics, all manner of fallacies. you should consider taking online courses on epistemology or something. google how to argue better. or argue with gpt-4! you gotta tighten up somehow.


>breakdown
>
>it's hard for me to make it obvious to anyone that what you're
>saying is puffed up bullshit and overconfidence because you
>don't know anything about anything and you've listened to a
>bunch of podcasts telling you that you're a Very Good AI
>Prompter.

you seem to know a lot about me. yah yah yah.

i can try to explain at length why llms are world-changers but with you it's pearls before swine. why take the time? i am actually fascinated by how you think and i am concerned to know how prevalent your pathologies might be in others. i think your way of thinking is damaging politics in america. are you one who has sublimated some idea of Christian sin into other areas and lashes out like a maniac condemning condemning condemning? this, as we can see from your contributions on okp, is not a recipe for clear-headed, useful thinking. so i hope you can change. i feel bad for you. but you bring this place down and drive away discussion. you are obsessed with various 'thought crimes' and don't realize how narrow your worldview has become. eg you play a terrible game of considering a person who reads another person to hold all of their views, shares their 'sins'. better to take an a la carte approach when forming beliefs...take what you find is useful from many places and determine for yourself what is true. maybe study probabilistic reasoning and some basic epistemology? the high level of angry vitriol + the low level of argument is your bread and butter. i can't imagine how you are irl. i wish you well, this is a waste of my time, though. i think you are very wrong-headed in this thread, but i'm not sure exactly what you're claiming.


if you want to form a coherent critique of llms, im down to discuss how our views may differ or overlap. in a sentence or three what dont they do that you seem to think I (and others) think they do? again, i get the impression that you havent used them very much.

the same goes for...dismissing Altman's intelligence. and scoffing at the idea that there could be any value in listening to Wolfram in conversation for 4 hours...simply because the host is Fridman. flesh out your angry claims here. what do you mean? how do you know Altman is a dumb motherfucker. you talked to him? about what? what did you talk to Fridman about? show us what you mean, please.



13484613, Yes, I have talked to Sam Altman and that's how I know he's an idiot
Posted by Rjcc, Tue May-16-23 07:42 AM
I don't have to say shit about shit.

I can just let you keep talking and making my points for me.

www.engadgethd.com - the other stuff i'm looking at
13484672, ^clownshow
Posted by fif, Tue May-16-23 04:25 PM


>RE: Yes, I have talked to Sam Altman and that's how I know he's an idiot
>I don't have to say shit about shit.
>
>I can just let you keep talking and making my points for me.
>
>www.engadgethd.com - the other stuff i'm looking at
13484678, oh don't stop talking now keep digging
Posted by Rjcc, Tue May-16-23 05:00 PM

www.engadgethd.com - the other stuff i'm looking at
13484625, They're already obviously changing the world.
Posted by Triptych, Tue May-16-23 10:22 AM
Dunno what rjcc's on, but we do need to be actually talking about it.
13484636, show me where I said you can't talk about it or shouldn't
Posted by Rjcc, Tue May-16-23 01:07 PM

I said you shouldn't take advice from well known racists since they spread misinformation.

I'm missing how this is confusing


www.engadgethd.com - the other stuff i'm looking at
13484673, save grace in this thread
Posted by fif, Tue May-16-23 04:26 PM
by making it about race. good luck, rjcc.
13484675, lol. how is stating a fact making it about race?
Posted by Rjcc, Tue May-16-23 04:57 PM
I'm sure you don't want to discuss racism, I see who you look up to.

But racism is not race.

www.engadgethd.com - the other stuff i'm looking at
13484674, yup, IBM's job announcement?
Posted by fif, Tue May-16-23 04:36 PM
Plans to hire for 7,800 roles frozen because of AI. But rjcc knows more than IBM. he talked to Altman but won't tell us what he said.

he views everything through a very narrow, very distorting political lens, doesn't engage with facts, just knows that x people are good and y people are bad, dumb, etc.

i hope no one believes a thing he writes in here on AI, because i think it is in everyone's interests to get schooled up on these things. AI is ALREADY transforming the job market...people have to think about how everything that is coming is going to affect them. being reasonable, trying to understand calmly is the way. rjcc is spreading misinformation that is only going to confuse people
13484677, what do you mean I won't tell you what altman said?
Posted by Rjcc, Tue May-16-23 05:00 PM
you gotta stop smoking crack


www.engadgethd.com - the other stuff i'm looking at
13484696, you're something else
Posted by fif, Tue May-16-23 06:56 PM

fif:
"how do you know Altman is a dumb motherfucker. you talked to him? about what? what did you talk to Fridman about? show us what you mean, please."

rjcc:
"Yes, I have talked to Sam Altman and that's how I know he's an idiot. I don't have to say shit about shit. I can just let you keep talking and making my points for me.

fif:
"...he talked to Altman but won't tell us what he said."

rjcc:
"what do you mean I won't tell you what altman said?
you gotta stop smoking crack"

13484702, what's unclear there?
Posted by Rjcc, Tue May-16-23 08:15 PM

www.engadgethd.com - the other stuff i'm looking at
13484679, using IBM as an example is the funniest shit I've ever seen
Posted by Rjcc, Tue May-16-23 05:03 PM
because I'm not saying AI will change things or won't

but I am saying that IBM has been wrong about basically every tech development for the last 30 years or so, and through catastrophic mismanagement, has had mass layoffs constantly throughout that period.

that is the worst justification you can use for anything.


www.engadgethd.com - the other stuff i'm looking at
13484695, dude u write a tech blog
Posted by fif, Tue May-16-23 06:47 PM
tech layoffs and freezes are all over the place. there was a ton of bloat already and now here comes a massive productivity accelerant to the mix. fewer labor hours are already needed to do the same work in many areas: i know someone who can do all their work that used to take 4hrs (of an 8hr work day) in less than an hour. so keep on keeping on.

you are in on some secret knowledge that a lot of people way smarter don't have, i wonder what it could be.

you are a living embodiment of the dunning-kruger effect
13484703, IBM pissing away the last 30 years isn't secret tech knowledge!
Posted by Rjcc, Tue May-16-23 08:16 PM
that's not even arguable

it is accepted knowledge

www.engadgethd.com - the other stuff i'm looking at
13484706, IBM was up in 96, and it's been down since then bruh
Posted by Rjcc, Tue May-16-23 09:57 PM

https://sloanreview.mit.edu/article/the-decline-and-rise-of-ibm/


https://slate.com/technology/2022/01/ibm-watson-health-failure-artificial-intelligence.html
https://www.zdnet.com/article/what-went-wrong-at-ibm-its-master-plan-has-failed-to-deliver/
https://www.protocol.com/enterprise/ibm-lost-public-cloud
https://www.computerworld.com/article/2471825/5-reasons-why-ibm-is-full-of-fail.html
https://www.nytimes.com/2021/07/16/technology/what-happened-ibm-watson.html

https://247wallst.com/technology-3/2023/01/26/ibm-is-techs-most-beaten-down-company/
"In 1980, International Business Machines Corp. (NYSE: IBM) was the eighth largest corporation in the country, according to the Fortune 500. It was bigger than General Electric. No other tech company was in the top 30 companies on the list. In the current list, IBM ranks 49th. It is miles behind Alphabet, Amazon, Apple and a small army of other tech companies. By any measure, IBM is big tech’s largest failure."

https://www.digitaljournal.com/business/does-not-compete-the-decline-of-ibm/article

https://www.inc.com/walter-simson/what-you-can-learn-from-ibms-massive-turnaround-failure.html


"How IBM misjudged the PC revolution"
http://news.bbc.co.uk/2/hi/business/4336253.stm


www.engadgethd.com - the other stuff i'm looking at
13484493, Patrick Collison interviews Sam Altman
Posted by fif, Sat May-13-23 06:35 AM
https://youtu.be/1egAKCKPKCk (53min)
5/10/23

13484512, huh. after I point out that your handpicked chatgpt example
Posted by Rjcc, Sun May-14-23 11:05 AM
shows exactly how much bullshit the thing is capable of, you don't respond to that but start making personal insults instead.
interesting.


www.engadgethd.com - the other stuff i'm looking at
13484586, RE: huh. after I point out that your handpicked chatgpt example
Posted by fif, Mon May-15-23 02:36 PM
Can we get your opinion on Patrick Collison's intelligence? Can you explain why he would sit down with Altman for an hour? Is collison also a dumb motherfucker?

We need a list of people you respect.
13484615, idk if you've ever heard of these things called fallacies
Posted by Rjcc, Tue May-16-23 07:43 AM
you should ask chatgpt about them

www.engadgethd.com - the other stuff i'm looking at
13485556, AI has the wrong name
Posted by Rjcc, Sun Jun-04-23 05:13 PM
https://www.ft.com/content/c1f6d948-3dde-405f-924c-09cc0dcf8c84


"Chiang’s main objection, a writerly one, is with the words we choose to describe all this. Anthropomorphic language such as “learn”, “understand”, “know” and personal pronouns such as “I” that AI engineers and journalists project on to chatbots such as ChatGPT create an illusion. This hasty shorthand pushes all of us, he says — even those intimately familiar with how these systems work — towards seeing sparks of sentience in AI tools, where there are none.

“There was an exchange on Twitter a while back where someone said, ‘What is artificial intelligence?’ And someone else said, ‘A poor choice of words in 1954’,” he says. “And, you know, they’re right. I think that if we had chosen a different phrase for it, back in the ’50s, we might have avoided a lot of the confusion that we’re having now.”

So if he had to invent a term, what would it be? His answer is instant: applied statistics.

“It’s genuinely amazing that . . . these sorts of things can be extracted from a statistical analysis of a large body of text,” he says. But, in his view, that doesn’t make the tools intelligent. Applied statistics is a far more precise descriptor, “but no one wants to use that term, because it’s not as sexy”."


www.engadgethd.com - the other stuff i'm looking at
13486050, But actual sentience is not a really a requirement for AI / AGI.
Posted by Triptych, Fri Jun-09-23 02:26 PM
Definitely not in the modern sense.

And even historically it was always assumed that AI would arise through computation / mathematics.

Which is pretty much deterministic and statistical at its core.

So.... Yes of course there are statistics involved.

Or how about our best understanding of the entire universe, including every atom in each of our bodies, is best explained by the Standard Model, which is pretty much very fancy applied statistics.

Requiring something that can't be modeled by applied statistics kinda means we're in the realm of the metaphysical. Even then, plenty of metaphysical systems consider that strong sense of I-ness to be an illusion.

The question he should be asking is how much of our human language-related cognition can be viewed as applied statistics. How much of what we say and think is predictable within some error margin. How robotic are we?

Or, whether sentience is a requirement for intelligence. Plant life, for instance, or mushrooms, show communication, adaptation, path finding, etc. with no measurable sense of I-ness.

13486116, I have no idea what this is a response to
Posted by Rjcc, Fri Jun-09-23 07:34 PM
"The question he should be asking is how much of our human language-related cognition can be viewed as applied statistics. How much of what we say and think is predictable within some error margin. How robotic are we?"

this is a dumb fuckin question?

if you slap someone in the face they'll probably say ow, it's very predictable, does that make them robotic or human?


a plant is more intelligent than an LLM because a plant has never told anyone that the word french fries has 17 letter Zs in it.

www.engadgethd.com - the other stuff i'm looking at
13486128, Define your terms
Posted by fif, Sat Jun-10-23 12:11 AM
Did Aristotle lack intelligence because he thought the sun, moon and stars revolved around the earth? An entity can possess "intelligence" without being infallible, right?

How do you define intelligence?
13486153, I just said a plant has intelligence
Posted by Rjcc, Sat Jun-10-23 04:42 PM
I don't know how you take from that, that I'm somehow gatekeeping

if you want me to call it intelligent, then I'm going to have to point out that it's the dumbest piece of shit version of intelligence that has ever existed.

if I call it what it is, a spreadsheet with some spice, then we can talk about how incredibly capable it is with that.


www.engadgethd.com - the other stuff i'm looking at
13486168, We need to know how you define intelligence
Posted by fif, Sun Jun-11-23 01:36 AM
Otherwise this is pointless.

Do you believe all 3 of these sentences are true?
Plants are intelligent. Humans are intelligent. No current AI/LLM is intelligent.

Do you think it is possible for a "machine" to be intelligent? If yes, how would you know this has happened? If no, why not? Can only living things have intelligence?

You hold that both plants and people are 'intelligent'. What is the common thread they share? What can they do that things without intelligence can't? Self-awareness? Goal formation? Goal-directed behavior?

"Dumbest piece of shit version of intelligence". Well, is slime mold intelligent? Prokaryotes? Tell us where and how you draw the lines.

A spreadsheet with spice? Hmm. Odd way of seeing it
13486240, you can call it intelligent if you want, I can't stop you.
Posted by Rjcc, Mon Jun-12-23 11:27 AM
but if you do then I have to point out how dumb it is.

I think that's a bad way to have the conversation.

I think the definition of intelligence can be broad enough to cover an Intel 486 CPU. It's literally a rock that thinks, why would I say an LLM isn't?

www.engadgethd.com - the other stuff i'm looking at
13486118, I feel like you filled something in I didn't say.
Posted by Rjcc, Fri Jun-09-23 07:38 PM
I wasn't talking about AGI, because you'd have to be a fucking dumbass to look at the generative AI tools that exist and say anything about AGI.

there isn't a question of whether they have generalized intelligence, because they don't have non-generalized intelligence, or any intelligence at all.

and if you say they do, you can't define it in any way that excludes a graphing calculator from qualifying as AI which should be a hint that you're making a silly argument.

www.engadgethd.com - the other stuff i'm looking at
13486143, It can basically do your job already so ...
Posted by Triptych, Sat Jun-10-23 11:28 AM
13486152, if you think that, then you don't understand what my job is.
Posted by Rjcc, Sat Jun-10-23 04:40 PM

www.engadgethd.com - the other stuff i'm looking at
13486154, the weird part about this response
Posted by Rjcc, Sat Jun-10-23 04:43 PM
is that the job it's best at is the one you do.


www.engadgethd.com - the other stuff i'm looking at
13486164, wasn't even true when I was a coder lol
Posted by Triptych, Sat Jun-10-23 09:15 PM
But yeah ChatGPT can definitely blog its ass off.
13486165, I don't blog, and it's bad at blogging
Posted by Rjcc, Sat Jun-10-23 10:01 PM
and even when I did blog, that's still not what my actual job is.

also, you're wrong about it being best at what you do

www.engadgethd.com - the other stuff i'm looking at
13486167, yeah i guess we'll see
Posted by Triptych, Sat Jun-10-23 10:36 PM
13486241, you being wrong about what my job is isn't something we have to wait on
Posted by Rjcc, Mon Jun-12-23 11:27 AM

www.engadgethd.com - the other stuff i'm looking at
13486242, I know online publishing pretty well, as well as their thoughts on AI
Posted by Triptych, Mon Jun-12-23 11:51 AM
but i get it. you're an editor congrats.

Pretending AI is useless is a risk to your industry.
13486426, where did I say AI is entirely useless?
Posted by Rjcc, Wed Jun-14-23 09:39 AM
and since you're not going to answer that question, why don't you answer this one

why are you mad that you think I don't like AI?


www.engadgethd.com - the other stuff i'm looking at
13486432, also, editing isn't the important part of what I do
Posted by Rjcc, Wed Jun-14-23 09:45 AM
but you don't know anything about the industry if you think it is.



www.engadgethd.com - the other stuff i'm looking at
13485987, if you're wondering why I say Sam Altman is a dumbass
Posted by Rjcc, Fri Jun-09-23 09:19 AM

try listening to Sam talk for one minute

https://twitter.com/bilawalsidhu/status/1666968372976730113

"Q: "After doing AI for so long, what have you learned about humans?"
Sam Altman: "I grew up implicitly thinking that intelligence was this, like really special human thing and kind of somewhat magical. And I now think that it's sort of a fundamental property of matter"

A FUNDAMENTAL PROPERTY OF MATTER

that's the dumbest fuckin thing anyone has ever said about anything, ever, in the history of existence.

www.engadgethd.com - the other stuff i'm looking at
13486054, How much do you know about matter?
Posted by Triptych, Fri Jun-09-23 02:35 PM
.
13486117, more than Sam Altman does
Posted by Rjcc, Fri Jun-09-23 07:35 PM
a person who, by his own admission, thought intelligence was uniquely human.

have you ever met an animal? Sam Altman hadn't, because he's the dumbest mf ever

www.engadgethd.com - the other stuff i'm looking at
13486261, Man I had hopes for RJCC in this one
Posted by Buddy_Gilapagos, Mon Jun-12-23 04:16 PM
Was hoping with a subject he is interested in and has expertise, he could show he can interact with people like a normie of his age.


**********
"Everyone has a plan until you punch them in the face. Then they don't have a plan anymore." (c) Mike Tyson

"what's a leader if he isn't reluctant"
13486310, So painful watching engineers talk to tech journalists.
Posted by Nopayne, Tue Jun-13-23 11:28 AM
13486429, if people want to lie and say that IBM historically makes good decisions
Posted by Rjcc, Wed Jun-14-23 09:42 AM
I can't do anything about that

they haven't been correct since they helped the nazis

www.engadgethd.com - the other stuff i'm looking at
13486315, :(
Posted by Mynoriti, Tue Jun-13-23 12:37 PM
>Was hoping with a subject he is interested in and has
>expertise, he could show he can interact with people like a
>normie of his age.
>
13486406, Expertise is a strong word
Posted by Triptych, Tue Jun-13-23 10:11 PM
13486428, sorry I have criticism for something you have money tied up in
Posted by Rjcc, Wed Jun-14-23 09:41 AM
let's get it all on the table, that's what this is about.

www.engadgethd.com - the other stuff i'm looking at
13486437, lol so go out and short some AI stocks.
Posted by Triptych, Wed Jun-14-23 10:01 AM
.
13486439, that you don't understand that having a financial incentive
Posted by Rjcc, Wed Jun-14-23 10:24 AM
would be bad

is part of you not understanding what my job actually is.

let's be clear -- you have a very direct financial incentive in this conversation.

I do not. someone using chatgpt to write their intraoffice email isn't an issue for me.

www.engadgethd.com - the other stuff i'm looking at
13486534, Makes sense if I believe convincing you affects stock prices.
Posted by Triptych, Thu Jun-15-23 02:41 AM
.
13486540, who said anything about the stock price?
Posted by Rjcc, Thu Jun-15-23 07:58 AM
this is just ego and emotion that stems from the previous financial decisions

you can't have been even partially incorrect about your assessment.

That's why you don't want to answer any questions about it, they'd take you down a bad path.

www.engadgethd.com - the other stuff i'm looking at
13486548, Guess I'll recuse myself from the S&P 500 too. Damn index funds
Posted by Triptych, Thu Jun-15-23 08:45 AM
13486577, I think just not stanning Sam Altman's intelligence is enough
Posted by Rjcc, Thu Jun-15-23 10:46 AM
mans has a billion bucks and an AI chatbot army he can fight his own battles

www.engadgethd.com - the other stuff i'm looking at
13486541, also...what I think does affect stock prices
Posted by Rjcc, Thu Jun-15-23 07:59 AM
but that's a whole separate discussion that isn't relevant here.

www.engadgethd.com - the other stuff i'm looking at
13486657, 👍
Posted by Triptych, Fri Jun-16-23 09:13 AM
13486440, also, I've been long Nvidia since 1997, so theoretically
Posted by Rjcc, Wed Jun-14-23 10:26 AM
I'm up roughly a trillion bucks thanks to AI

www.engadgethd.com - the other stuff i'm looking at
13486427, I think you should write up all of your work using ChatGPT all the time
Posted by Rjcc, Wed Jun-14-23 09:40 AM
that is my advice to you

oh, and you should listen to Sam Altman and buy lots of his worldcoin and let him scan your eyeball.

I want you to do that please, thank you.

www.engadgethd.com - the other stuff i'm looking at
13486545, man, these AI haters won't stop talking about how dumb AI is
Posted by Rjcc, Thu Jun-15-23 08:28 AM
"Artificial intelligence is not yet as smart as a dog, Meta A.I. chief says"



"At the same panel, Yann LeCun, chief AI scientist at Facebook parent Meta, was asked about the current limitations of AI. He focused on generative AI trained on large language models, saying they are not very intelligent, because they are solely coached on language.

“Those systems are still very limited, they don’t have any understanding of the underlying reality of the real world, because they are purely trained on text, massive amount of text,” LeCun said.

“Most of human knowledge has nothing to do with language … so that part of the human experience is not captured by AI.”

LeCun added that an AI system could now pass the Bar in the U.S., an examination required for someone to become an attorney. However, he said AI can’t load a dishwasher, which a 10-year-old could “learn in 10 minutes.”

“What it tells you we are missing something really big … to reach not just human level intelligence, but even dog intelligence,” LeCun concluded."


"In another example of current AI limitations, he said a five-month-old baby would look at an object floating and not think too much of it. However, a nine-month year old baby would look at this item and be surprised, as it realizes that an object shouldn’t float.

LeCun said we have “no idea how to reproduce this capacity with machines today. Until we can do this, we are not going to have human-level intelligence, we are not going to have dog level or cat level.”

https://www.cnbc.com/2023/06/15/ai-is-not-even-at-dog-level-intelligence-yet-meta-ai-chief.html


you don't have to take my word for it, you can literally just read what the actual experts say.



www.engadgethd.com - the other stuff i'm looking at
13486638, RE: man, these AI haters won't stop talking about how dumb AI is
Posted by fif, Fri Jun-16-23 02:46 AM
Here's LeCun saying something more interesting:

https://pasteboard.co/u6ggCHlIoNG2.jpg (ai-transcription of LeCun talking)

Reverse engineer the free energy principle?

The LLMs made a huge leap but no one is calling them humans or dogs. They are a new category. Humans used to be the only ones that could do what they do with language. Is it the same? Worse? Better? Those aren't very useful categories of description here. Better to think in terms of capabilities and limitations. What can they do? What can't they do? These things are new under the sun. No one really knows what these things are and where they might be going. But they aren't going away. Right now they don't share a phenomenological world with us. But video exists. If they can grab VISUAL "vocab", "syntax", etc. like they did for language? Who knows. Some people talk about the LLMs as a piece of the anatomy of a future, more "in the world" AI. Possibly. I don't know! But it's pretty fucking interesting.

If you don't find them interesting, that's on you. I'd like to know why. Odd behavior for someone in your field at this point.
13486644, #1 LLMs exhibit more intelligence than you do
Posted by Rjcc, Fri Jun-16-23 07:42 AM
#2 who said I don't find them interesting?

I find them super interesting, and I find the human beings who can't tell the difference between garbage text output by a machine and actual intelligence even more interesting.

you ever read the entire court transcript of the dumbass lawyer who submitted a brief written by chatgpt that made up multiple case citations? If you haven't, then I guess you just don't think AI is interesting

www.engadgethd.com - the other stuff i'm looking at
13486725, RE: #1 LLMs exhibit more intelligence than you do
Posted by fif, Fri Jun-16-23 09:11 PM
>#2 who said I don't find them interesting?
>


>I find them super interesting, and I find the human beings who
>can't tell the difference between garbage text output by a
>machine and actual intelligence even more interesting.

a) you refuse to define what you mean by "actual intelligence" vs "garbage text output"

b) up and down this thread you're straw-manning people's positions. you seem to feel you have to warn us all not to be like the idiot lawyer below. have some faith. people will always do dumb shit, this technology is new and rapidly evolving, its strengths and weaknesses are still tbd. too early to start making rules, putting people in boxes like chief know-it-all, gotta keep the spirit of inquiry alive.

in this thread, it is very difficult to understand what you actually believe/think. you're not helping anyone's understanding along. you have very strong feelings, but so far you're not expressing your reasoning well. you imply a lot but make your case very little.

i am still curious: how much have you interacted with gpt-4? have any of its outputs impressed you? it does some very novel things for a machine, no?


>you ever read the entire court transcript of the dumbass
>lawyer who submitted a brief written by chatgpt that made up
>multiple case citations?

this is human folly, a rando lawyer cutting corners. not much to do with what the machines can or can't do. the intersection of other people and any technology is interesting, but i think needs to be put aside. one on one, face to face, words to words with the machine is where the understanding of oh shit the world aint the same happens.


>If you haven't, then I guess you just
>don't think AI is interesting

this is a non-sequitur
13486726, if you are interested in the negative
Posted by fif, Fri Jun-16-23 09:24 PM
waves into the human world that may be heading our way...Zak Stein has some interesting ideas here.

as for your Altman hate, etc...there is some reason for optimism that the tech will not end up in centralized big corpo hands. the recipes aint that hard. Ben Goertzel is a brilliant long-time AI hand (who has a much better grasp on cognitive science than someone like Altman)...you might be interested in his ideas.

this is a big space of ideas. i am concerned that some of the "killer apps" (like human-conversational-paced human speech-to-text input --> ai text-to-speech output...will require heaps of expensive compute ($$$$)...entrenching the big players (and their mind-steering ways) deeper into our souls...but too soon to tell
13486787, lol @ "altman hate" I said the mf is dumb because he is
Posted by Rjcc, Mon Jun-19-23 07:13 AM
that's not hate it's a reflection of the facts as they are.

www.engadgethd.com - the other stuff i'm looking at
13486786, cool, so you don't think AI is interesting.
Posted by Rjcc, Mon Jun-19-23 07:12 AM
tf you mean I haven't defined actual intelligence? I said LLMs have it, and they're smarter than you are. how is that not clear?

www.engadgethd.com - the other stuff i'm looking at
13494718, Sam Altman got fuckin fired today, btw
Posted by Rjcc, Fri Nov-17-23 03:49 PM
https://openai.com/blog/openai-announces-leadership-transition

idk, maybe you should trust him?

www.engadgethd.com - the other stuff i'm looking at
13494747, Reports say Sutskever may have orchestrated
Posted by fif, Sat Nov-18-23 10:43 AM
The ousting. Very interesting. Altman's tweet claiming AGI had been achieved, then saying haha it was a joke, ruffled a lot of feathers in the alignment world. Sounds like Altman/Brockman may have been pushing to go go go with $ signs in their eyes and Sutskever and others did this to reconnect the brake pedal.

With Google's Gemini on the horizon, OAI's biz end seemed to be pushing to roll out features to lock in customers.

--

But your "point" still makes no sense. You're counseling people to put their fingers in their ears. Altman has firsthand knowledge of the inner workings of the company that created the most powerful AI to ever exist. Obviously, this is someone worth listening to.

Trust is something else. You don't need to want to go bowling with someone for there to be value in listening to them.

--

Dario Amodei at Anthropic...should we listen to him? Or no cuz SBF and Caroline invested?

https://youtu.be/Nlkk3glap_U?si=FGCeoqgj7ZukGBuZ

--

Whose views here DO you value? Gary Marcus and other skeptics only?

13494764, I have never said this
Posted by Rjcc, Sat Nov-18-23 07:57 PM
"You're counseling people to put their fingers in their ears."


I said he's a fuckin dumbass

idc what company he runs, he's still an idiot.

you don't learn efficiently by listening to idiots with a financial interest in what they're selling.

there are other people who can tell you what's actually important better than he can.

also, maybe he's back!



www.engadgethd.com - the other stuff i'm looking at
13494766, Hm ok, sure
Posted by fif, Sun Nov-19-23 12:14 AM
Rjcc:
"my dog has better insights about literally anything than Sam Altman and it's been dead for ten years. dude's a dumbass. I don't present my colleague as more of an expert than him, I think literally anyone with a semifunctional brain is better."

--
>"You're counseling people to put their fingers in their
>ears."
>
>
>I said he's a fuckin dumbass
>
>idc what company he runs, he's still an idiot.

Rjcc in this thread:
"how am I living in a bubble? I've spoken to Altman and Fridman.

That's why I know they're dumb as shit and anyone who listens to them is a moron"

--

>you don't learn efficiently by listening to idiots with a
>financial interest in what they're selling.
>
>there are other people who can tell you what's actually
>important better than he can.

So never listen to the statements of CEOs about their company? Odd strategy. You'd think they'd have access to information that is not public...about the company. Seems particularly valuable when the product being sold is a brandass new technology that is rapidly being implemented in a bajillion ways by a bajillion people. Wouldn't the CEO talking about the company's internal workings/discussions potentially be very useful info here? Guess not.

Who would be better to listen to about OpenAI in the last year? Ilya Sutskever is one. He's doing the cooking; Altman's role seemed more deciding which food to serve and how.

What's unclear is how important figures like Sutskever are to continued refinement/further training etc. Now that gpt-4 has been made, is Sutskever et al.'s computer wizardry replicable by others? Is his recipe out of the bag?

Who are the OAI employees threatening resignation? HR? Middle management? Or the rarefied talent that made it happen.

I've heard openai's technique/skill is top of the league. Not a matter of just adding scale. But how much does one brilliant developer matter now? I have no idea.

Only have read an article by some site called the verge on the begging-Sammy-back saga. It mentions "senior researchers" threatening to leave. But how indispensable are they really?

If it is really a matter of getting Sutskever and a few others..I'd worry bout getting kidnapped if I were them.

Claude 2 ain't gpt-4 level for the most part. But it's damn good, actually seems better in some ways (more naturalistic prose style, for one). And Anthropic's team is tiny. Seems possible Meta just didn't have the people to make theirs compete. Next round, does everyone level up? How hard is it to get to the November 2023 version of gpt-4? A bunch of money, compute + very good, very competent engineers to copy recipes? I don't know. Gemini will give us an answer of sorts. If Gemini is delayed or not up to par (and really, given goog's resources, it should be a step past gpt-4)...then maybe Sutskever and his hand-picked essential coterie got all the leverage in the world. I am going to guess Gemini will be right around gpt-4 for "smarts"...will be very interested in what size context window it can manage with perfect verbatim recall. That's the big hangup right now for a lot.

The new gpt-4 web browsing hangs the app and is mostly not what I want it to do when it does it--"browsing the web" interminably. "Don't browse the web unless explicitly asked," I find myself inputting, miffed a mite at the machine.

But rumors say Google may have figured out how to "update" the "real" deep training with new data (current events, other things left out first go around) so that the LLM can give native answers to queries that spark "browse the web" in today's gpt-4/bing.

We will see. Doesn't make sense in principle based on my understanding of how the training works--tacking on like that. But if it truly works at 100% fidelity (same ability to output the new data as the data included from the jump of the long training)...then that'd be a game changer.
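For what it's worth, the way fresh facts usually get bolted onto a frozen model today is retrieval rather than touching the trained weights: fetch the most relevant snippet from a store of new documents and paste it into the prompt. A toy sketch of that idea in Python (word-overlap scoring standing in for real embedding search; the documents are made-up examples, and none of this is Google's or OpenAI's actual pipeline, just the general shape):

# Store of documents the model never saw in training (made-up examples).
fresh_documents = [
    "OpenAI's board fired Sam Altman in November 2023.",
    "Anthropic's Claude 2 offers a 100k token context window.",
    "Google is expected to release its Gemini model soon.",
]

def overlap(query, doc):
    # Crude relevance score: how many lowercase words the two share.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query):
    # Pick the stored document that best overlaps the question.
    return max(fresh_documents, key=lambda doc: overlap(query, doc))

query = "what context window does Claude 2 offer"
context = retrieve(query)
prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this augmented prompt is what the frozen LLM would actually see

The weights stay frozen; only the prompt changes, which is why it can feel like "browsing" bolted on rather than knowledge the model natively has.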

Some thoughts and questions I've got.


>
>also, maybe he's back!
>

What a bizarre twist. This may be what happens when the people who really know how to brew the sauce have zero biz acumen. Haven't read up on this much yet. The doomers getting raked rn for being bad capitalists. Satya aghast at a biz partner own-goaling. But telling perhaps that some of the brightest, most intimately acquainted with the machines...are some of the staunchest doomers. But the actual crux of Sutskever's beef (if true) with Samuel Alt, we don't know.
13494791, bro, I'm going to need you to have chatgpt summarize that
Posted by Rjcc, Sun Nov-19-23 07:04 PM

www.engadgethd.com - the other stuff i'm looking at
13494748, Neel Nanda, mechanistic interpretability
Posted by fif, Sat Nov-18-23 11:11 AM
https://youtu.be/_Ygf0GnlwmY?si=8iJvZjbh828GRilO

This shit flies way over my head but something... beautiful...about listening to a guy who finds all this math etc..."beautiful"..."gorgeous".

Mechanistic interpretability is basically a field where engineers/mathematicians are trying to reverse-engineer how the LLMs do what they do, to understand them better. We can't trace back their steps rn; these people are trying.
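For a flavor of what "tracing back their steps" even looks like mechanically, here's a toy sketch (my own, assuming PyTorch and a stand-in two-layer network rather than a real LLM): hook a layer, record what it computes on a forward pass, and the actual interpretability work is then making sense of piles of numbers like these.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "model": a tiny MLP. Real work targets attention heads and MLP
# blocks inside a transformer, but the bookkeeping looks the same.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()  # stash what this layer computed
    return hook

# Attach a hook to the hidden layer so every forward pass records its output.
model[1].register_forward_hook(save_activation("hidden_relu"))

x = torch.randn(3, 8)   # a fake batch of 3 inputs
_ = model(x)            # run the model; the hook fires during this call

print(activations["hidden_relu"].shape)  # torch.Size([3, 16])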



13494767, Interesting... thanks for the link
Posted by Triptych, Sun Nov-19-23 02:54 AM
.
13494770, RE: Interesting... thanks for the link
Posted by fif, Sun Nov-19-23 09:31 AM
Follow up with the Dwarkesh Patel interview I linked above if you dig the Neel Nanda one. Link: https://youtu.be/Nlkk3glap_U?si=T4GRiyff7Y9057P8

Amodei is CEO at Anthropic, his PhD is in physics. Lot of physicists working at Anthropic. Says physicists learn fast and field of machine learning small enough that they can get up to speed fairly quickly.

Amodei intentionally dodges the limelight (contra Altman, who seemed on a giddy ego trip). Says the idea of companies as battling figureheads is a distraction from substantive discussion. Has seen many colleagues warped by social media's snare, seen them become addicts, fiending for what he sees as the meaningless approbation of faceless groups of users. Winning approval is a corrupting distraction from the realness. Lessons in there for us all.

Refreshing to hear him say so. Anthropic's Claude 2 is second best rn; people should check it out. Free (though you will hit a wall of free inputs pretty quickly) + it having a native 100k token context window is huge. Anthropic realizing the importance of the window size and delivering a chunker from the jump is a sign they know what's what.
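Rough sketch of what a 100k token window actually buys you, for anyone curious: count how many tokens your document costs and check whether it fits. This uses OpenAI's tiktoken tokenizer as a stand-in, since Anthropic's tokenizer isn't the same, so treat the numbers as ballpark only.

import tiktoken

CONTEXT_WINDOW = 100_000    # Claude 2's advertised window
RESERVED_FOR_REPLY = 2_000  # leave room for the model's answer

enc = tiktoken.get_encoding("cl100k_base")

def fits_in_window(document):
    # Tokens, not characters or words, are what the window is measured in.
    n_tokens = len(enc.encode(document))
    print(f"{n_tokens} tokens")
    return n_tokens + RESERVED_FOR_REPLY <= CONTEXT_WINDOW

# 100k tokens is very roughly 70-80k English words, i.e. a whole novel pasted
# in at once, which is why window size matters so much for verbatim recall.
print(fits_in_window("Paste the whole annual report or novel here..."))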

Amazon recently invested $4bn in Anthropic. Details scant last I checked. Will this corrupt their ethic somehow? Don't think so at this point. Whereas OpenAI squeezing up with MS seemed, for Altman, to mean: move fast, maximize subscriptions now now now...get users locked in and loyal before Google drops Gemini...into a world where almost everyone who uses a phone or computer already interfaces with Google through their products.

Sutskever and co (OpenAI) seem to be at the state of the art in training LLMs. The implementation, the UI though, is surprisingly shoddy (gotta believe Google can do better there easily). But Sutskever had a lot more $ to horsepower gpt-4 than Amodei's team had when crafting Claude 2.

So big q I have, as I mentioned above, is how much does individual talent matter vs dollars for compute and datasets. Lotta little tricks make the pudding. But how many are truly secret now and how long will any of these remain siloed? Data set massaging, hitting those pressure points just right. Repeated training runs a big first starter advantage...for now. But soon? 'info wants to be free' and all that. Can we ever dodge the rule that cash rules? Seems hard to bet against the doughiest bastards doughing up most from this. RN the knowhow is in the hands of a few interesting nerd squads.

Anthropic getting a 4billy pump of Bezos' sticky green juice...Claude 3 gonna rise.

negotiation maybe went like this..

Jeffy B asking Dario...'ok how much scrill u think u need to overtake gpt-4?' 'eh mebbe tree fitty--nah, second thought sez make it 4 big ones'

*bezos' tongue flickers into his Amazon Basics purse extracting 4 translucent writhing bills faced, not with dead prez, but holograms of seething red rivers surging from their source-waters: the blood of a billion babies*

'but sir! This is most vile! Blood money most foul-begot, surely!'

'Dario, Dario, Dario...funds are funds and fair is fair...you stick to where you meddle best, forge machines that can pass this test: when veins rage roidy upon my head, and most small countries wish me dead, I'll soar past in my dick shaped rocket, that's the cue for you to fill my pocket, if you don't want this grown baby to scream, make this machine mint me green!'

'well ok you nasty man, we'll start to work as fast we can. Though your proposition is sordid and slimy, our opposition, I often think, is far more grimey.'

Hm the good guys Anthropic? Iunno Amodei makes a good impression me. But he is a doomer. And so probably doomed. If u ain't mashing for cash...see ya later, the quaint minds that made it, left behind by them with the cash to take it? Very little stomach for doomers among the lucre-lickers with the loot-levers. Lose billions in market-share because many king nerds cry a hypothetical apocalypse is nigh? Rather go out immolated, last of the living, when the cash mountain catches fire finally. Cannot ever ever lose grasp on pole position in the race for the fattest stack. Or...cooler than Bezos forehead in his cryo tank coasting out among the stars while the rest of us burn with planet earth, our place of birth--whole he self-appointed Jeff sole controller of the universe. The nerds that tinker fantasized robot utopia, their cognitive powers so great, what they expressed will be yoinked, encoded in 0s & 1s, you are not needed now Google says to the alchemist Sutskever. All the math in ya head, the hands to type it out. And Ilya, you're another brick of Soylent in the wall, no different than the factory worker piecing together iPhones, gluing Nikes.

Shame the machines cost so much to build then constant cash infusions needed to keep em churning. And so now we got the greatest at greed, the soulless several, the richest rats around, lining up to fulfill their koans at the CREAM machine. Dolla Dolla bill y'all.

--

Point was...Amodei big on mechanistic interpretability as important going fwd to make certain the machines don't close our curtain. Paper paper clip y'all.

--

https://youtu.be/EU7PjYLruuM?si=ADFL3BqNwjwzxVXi

--

Ha some nonsense for y'all
13494795, gotta invest!
Posted by Rjcc, Sun Nov-19-23 07:10 PM

www.engadgethd.com - the other stuff i'm looking at
13494794, see my previous reply LOL
Posted by Rjcc, Sun Nov-19-23 07:10 PM

www.engadgethd.com - the other stuff i'm looking at
13494775, OpenAI’s board must be some idiots. They blindsided Microsoft who
Posted by soulfunk, Sun Nov-19-23 10:41 AM
Own almost half the company with that firing - while trading was still open causing Microsoft to lose billions in market value, and now it looks like Sam will be back with the old board ousted???
13494793, (they're all idiots who've been given a bunch of money)
Posted by Rjcc, Sun Nov-19-23 07:09 PM
once you actually understand what the generative AI space is, you'll realize that what you have is

a lot of religious zealots who want to worship a machine computer god

and/or (there's a lot of overlap)

some talented engineers who understand how to do the math that makes generative AI work, but are not capable of doing their own laundry or interacting with other human beings or having any life experience whatsoever

www.engadgethd.com - the other stuff i'm looking at
13494800, So now Microsoft just hired Sam Altman???
Posted by soulfunk, Mon Nov-20-23 04:54 AM
What a wild weekend. Apparently OpenAI wanted to hire him back, which he'd only agree to if they fired the rest of the old board. They instead hired a new CEO, and Microsoft is hiring him and Greg Brockman. And a bunch of their engineers.

https://www.marketwatch.com/story/microsoft-hires-sam-altman-after-openai-fails-to-bring-back-ex-ceo-5325a1e0?mod=mw_square
13494837, microsoft didn't invest in openai to not get generative AI tech
Posted by Rjcc, Mon Nov-20-23 03:20 PM
we'll see what happens

ain't no way altman sees himself as a microsoft employee

right now, this is a tactic to get him back in openai, we'll see what happens

www.engadgethd.com - the other stuff i'm looking at
13494850, What do you suspect the reason is that the board fired him?
Posted by soulfunk, Mon Nov-20-23 07:04 PM
The speculation since Friday had been that it was arguments on direction in safety between effective accelerationism vs effective altruism (AI boomers vs AI doomers) because of Sam pushing for commercialization, but the new CEO Emmett Shear at the end of the statement below is saying it wasn’t about that. (I’d guess that it really WAS about that but Microsoft doesn’t want that perception out there so they are trying to be careful about what’s said publicly after the sloppy PR weekend they had.)

https://twitter.com/eshear/status/1726526112019382275
13494851, only they know for sure.
Posted by Rjcc, Mon Nov-20-23 07:10 PM
the reasons given to employees are allegedly "he assigned the same project to two people" and "he gave two board members different opinions about someone"

idk what happened

www.engadgethd.com - the other stuff i'm looking at
13494937, the Post went in on him
Posted by shygurl, Wed Nov-22-23 11:54 AM
Basically he's a thinly veiled sociopath who's only in it for himself and didn't want any oversight from the board. (aka your typical libertarian tech douche)

Gifting the article cause I never use that function:

https://wapo.st/47lUdeA

13494941, Of COURSE he's a libertarian tech bro
Posted by handle, Wed Nov-22-23 12:45 PM
>Basically he's a thinly veiled sociopath who's only in it for
>himself and didn't want any oversight from the board. (aka
>your typical libertarian tech douche)
>
>Gifting the article cause I never use that function:
>
>https://wapo.st/47lUdeA
His crypto eyeball scanning shit is crazy.

But he got 97% of the company believing in him - who are seeing big dollar signs vs. the stated intention of the non-profit to try to get AGI working in a 'safe' manner.

OpenAI is simply a product company now looking to profit bigly over anything else.

I'm certain they'll restructure so the non-profit has very little power over the company and focus on big money.

He's walking in Andreessen's and Thiel's footsteps - both shit heels.



13494922, Altman back as CEO after 97% of employees threatened to quit
Posted by PimpTrickGangstaClik, Wed Nov-22-23 09:48 AM
I don't know anything about these people. But I know the movie is going to win an Oscar.

https://www.npr.org/2023/11/22/1214621010/openai-reinstates-sam-altman-as-its-chief-executive
13494943, the thing about the AI field is
Posted by Rjcc, Wed Nov-22-23 01:02 PM
you got these people who are so good at it

but they don't understand social anything at all

so they've created these systems that tell them it's because they're better and smarter than everyone

but really most of them are just assholes (I've done my 10,000 hours, I'm an expert on this)

www.engadgethd.com - the other stuff i'm looking at
13494947, Woulda been a great storyline on Succession
Posted by Triptych, Wed Nov-22-23 01:43 PM
13494972, Only started following this Sam Altman drama the last couple of days....
Posted by Buddy_Gilapagos, Wed Nov-22-23 10:16 PM
And I am pretty sure he is the villain.

Everything sounds like he was Team move-fast-and-break-things and the other side was let's slow down and consider the Skynet ramifications.

IDK. It's the first time I've looked at it.

This is shaded by my wife being pretty involved with a sector that wants to give up the ghost to AI and she is pretty sure it's going to end up disastrously. Enough that it was a factor in her leaving it.


**********
"Everyone has a plan until you punch them in the face. Then they don't have a plan anymore." (c) Mike Tyson

"what's a leader if he isn't reluctant"
13494973, lmao.
Posted by Nopayne, Wed Nov-22-23 10:47 PM
13494990, ^^^^ most accurate comment in here.
Posted by Triptych, Thu Nov-23-23 11:07 AM
.
13494992, Reuters swipe…SKYNET lol
Posted by soulfunk, Thu Nov-23-23 02:22 PM
https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/

OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say
By Anna Tong, Jeffrey Dastin and Krystal Hu

Nov 22 (Reuters) - Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI, the two sources said. Prior to his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.

The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend's events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.

Reuters could not independently verify the capabilities of Q* claimed by the researchers.

'VEIL OF IGNORANCE'

(Photo caption: Sam Altman, CEO of ChatGPT maker OpenAI, arrives for a bipartisan Artificial Intelligence (AI) Insight Forum for all U.S. senators hosted by Senate Majority Leader Chuck Schumer (D-NY).)

Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.

Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend.

In their letter to the board, researchers flagged AI’s prowess and potential danger, the sources said without specifying the exact safety concerns noted in the letter. There has long been discussion among computer scientists about the danger posed by highly intelligent machines, for instance if they might decide that the destruction of humanity was in their interest.

Researchers have also flagged work by an "AI scientist" team, the existence of which multiple sources confirmed. The group, formed by combining earlier "Code Gen" and "Math Gen" teams, was exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work, one of the people said.

Altman led efforts to make ChatGPT one of the fastest growing software applications in history and drew investment - and computing resources - necessary from Microsoft to get closer to AGI.

In addition to announcing a slew of new tools in a demonstration this month, Altman last week teased at a summit of world leaders in San Francisco that he believed major advances were in sight.

"Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime," he said at the Asia-Pacific Economic Cooperation summit.

A day later, the board fired Altman.

Anna Tong and Jeffrey Dastin in San Francisco and Krystal Hu in New York; Editing by Kenneth Li and Lisa Shumaker
13495021, so
Posted by Rjcc, Sat Nov-25-23 12:55 PM
the people who reported on this for reuters do know AI stuff

but the sources we talked to claimed there was no letter.

also, the breakthrough isn't...exactly as described.

there have been models that can do this type of stuff and it's still a zillion miles from AGI, they didn't say how much computing power it takes, etc.



www.engadgethd.com - the other stuff i'm looking at
13495024, finger on the pulse is it?
Posted by Triptych, Sat Nov-25-23 06:08 PM
13495070, ...what?
Posted by Rjcc, Mon Nov-27-23 09:51 AM

www.engadgethd.com - the other stuff i'm looking at
13495109, Yeah. It caught my attention because Reuters journalists
Posted by soulfunk, Mon Nov-27-23 06:18 PM
are not at all typically alarmists or quick to just break a story. (Disclaimer - I work for Thomson Reuters.) I'm seeing Verge also saying that there wasn't a letter. I'd guess we'll hear more details on it this week. It's also possible that Q* is a huge breakthrough but not at all "close" to AGI. Even in the above article it seems more like there was internal excitement and apprehension about where Q* could go, but not about where it currently is.
13495111, (I changed jobs a couple years ago I work at verge now)
Posted by Rjcc, Mon Nov-27-23 10:23 PM
the info I could pull together is that a. the stuff the researchers reported isn't that different from what others have seen, and one of the big questions is how much computing power they needed to pull it off, all that's known is a lot but like...there are degrees of a lot



www.engadgethd.com - the other stuff i'm looking at
13495117, Ha! Makes sense.
Posted by soulfunk, Tue Nov-28-23 07:41 AM
13495119, We make Skynet jokes but how real a threat is an AI apocalypse?
Posted by Buddy_Gilapagos, Tue Nov-28-23 08:42 AM
We are all trained by sci-fi movies to instantly think humans versus AI when we talk about AI advancement, but the folks who are really, really looking at it, have they seriously gamed out how AI could possibly pose existential threats?

And I am not talking threats to industries, creative industries and scammer tools, I am talking about AI taking over the grid and crashing planes and setting off nukes, like how real of a threat is that potentially?

Who has really explored this question?


**********
"Everyone has a plan until you punch them in the face. Then they don't have a plan anymore." (c) Mike Tyson

"what's a leader if he isn't reluctant"
13495120, AI safety is absolutely being explored and discussed. It's likely
Posted by soulfunk, Tue Nov-28-23 09:05 AM
the biggest issue in AI (well, maybe second biggest after the "AI will take all our jobs!" issue, but that's more of an issue brought up by people not knowledgeable on AI). I mentioned it above in the discussion on Sam Altman and effective accelerationism vs effective altruism (AI boomers who want to continue accelerating the growth and commercialization of AI vs AI doomers who are concerned about safety as we move towards AGI).

Here's a decent article:

https://www.ed.ac.uk/impact/opinion/openai-sam-altman-war-doomers-boomers#:~:text=Boomers%20vs%20Doomers,to%20the%20survival%20of%20humanity.

OpenAI corporate chaos reveals the war between AI 'doomers' and 'boomers'
Creating two camps in discussions of AI's future - those seeing opportunity versus those seeing a threat - is overly simplistic and could actually be a distraction.

Dr Gina Helfrich, Baillie Gifford Programme Manager, Centre for Technomoral Futures

The abrupt dismissal of OpenAI chief executive Sam Altman sent shockwaves through the world of artificial intelligence. But after Greg Brockman, the company’s president, quit in solidarity with Altman and more than 700 of its 770 employees threatened to do the same if Altman was not reinstated, it now appears he will return.

Instead, the OpenAI board that claimed Altman “was not consistently candid in his communications with the board”, without elaborating further, is to be revamped with new members. The lack of clarity about the reasons behind the split fuelled considerable speculation with a focus on ideological or philosophical differences about the future of artificial intelligence (AI).

Altman is known for pushing the AI industry to move quickly to release new AI-powered tools that others might have said were yet not ready for public use, like ChatGPT. It’s been suggested that the OpenAI board members who initially forced Altman out are more cautious; they worry about potential ‘existential risks’ they believe are associated with powerful AI tools and generally promote a slower approach to the development of increasingly larger and more capable generative AI models.

Boomers vs Doomers
These two ideological camps are sometimes referred to as ‘AI boomers’ – those who are ‘techno-optimists’, eager to hasten the benefits that they believe advanced AI will bring – and ‘AI doomers’ – those who worry that advanced AI poses potentially catastrophic risks to the survival of humanity.

The most extreme AI boomers decry any efforts to slow down the pace of development. Marc Andreessen, a billionaire venture capitalist and boomer, posted a ‘Techno-Optimist Manifesto’ in October in which he claimed that “social responsibility”, “trust and safety”, “tech ethics”, “risk management”, and “sustainability”, among other terms, represent “a mass demoralisation campaign… against technology and against life”. He also listed “the ivory tower” – in other words, our respected institutions of higher education – and “the precautionary principle”, which emphasises caution when dealing with potentially harmful innovations, as being among the techno-optimist’s “enemies”. You can see why someone might be concerned!

On the flip side, doomers are consumed with anxiety over the possibility that advanced AI might wipe out humankind. Some of OpenAI’s board members are affiliated with the Effective Altruist movement, which funds AI safety and AI alignment research and worries over the potential of this technology to destroy humanity.

The UK Government seems to be in the thrall of the doomers. Ian Hogarth, who leads the UK’s Frontier AI Taskforce, formerly known as the Foundation Model Taskforce, penned a viral opinion piece for the Financial Times in April, calling for a slow-down in “the race to God-like AI”. Rishi Sunak’s AI Safety Summit, held in early November 2023, was focussed on “existential risk”.

Shared doubts
Despite these differences, both boomers and doomers have one key belief in common: that we are just on the cusp of creating artificial general intelligence (AGI). You’ll be familiar with this thanks to the movies: Hal of 2001: A Space Odyssey, J.A.R.V.I.S. from the Iron Man and Avengers movies, Samantha from the movie Her, and of course the Terminator from the eponymous film are all examples of what Hollywood thinks AGI might look like. Boomers think it will bring amazing benefits, whereas doomers fear that, without precautions, we may end up with something more apocalyptic.

Nick Bostrom, co-originator of the Effective Altruism movement, explains their fears in the form of the “paperclip maximiser”: Pretend an otherwise harmless advanced AI technology had been set a goal to make as many paperclips as it could. An AGI of sufficient intelligence might realise that humans could thwart its paperclip maximising by either turning it off or changing its goals. Plus, humans are made of the same things paperclips are made of – atoms! Upon this realisation, the AGI could take over all matter and energy within its reach, kill all humans to prevent itself from being shut off or having its goals changed and, as a bonus, our atoms could then be turned into more paperclips. Truly a chilling thought experiment.

AI future is now
There is a third perspective, however – one that I share. I don’t believe we are anywhere close to the creation of AGI, and ‘existential risk’ is largely a bugbear and a distraction. But I still believe in the promise of AI, if only it is developed, governed, and applied responsibly to the areas where it can make a positive impact on human quality of life.

Given the emphasis on risk reduction, it might seem those who share my view have commonalities with the doomers. However, the main difference is that we believe current regulatory and safety efforts are best focussed on the many actual and present harms of AI tools, including, but not limited to, psychological harms suffered by gig workers hired to sanitise generative AI models, social harms caused by the persistent and endemic bias of generative AI models, and environmental harms such as the massive water and carbon footprint of generative AI models. Those who share my point of view were responsible for organising an ‘AI Fringe’ around the AI Safety Summit – focussed on addressing the real impacts of the technology, including on historically underrepresented communities and civil society, and diversifying the voices within the AI ecosystem.

While the ultimate fallout from Altman’s firing and rehiring is not yet clear, powerful actors at OpenAI, Microsoft, and other companies developing advanced AI are keen to direct public focus to hypothetical, ‘existential’ risks or potential future benefits of their technologies. I suggest that we would do well to remember that the harms of AI models are not just hypothetical, but all too real.

The article was first published in The Scotsman November 22 2023, Read the original.

The views expressed in this section are those of the contributors, and do not necessarily represent those of the University.
13495133, Yeah but I am wondering what exactly is AI Safety concerned about.
Posted by Buddy_Gilapagos, Tue Nov-28-23 11:47 AM
Even the “paperclip maximiser” example doesn't make a lot of sense to me. There are a lot of steps between making paperclips and wiping out humanity. I want to hear about all the steps in between that would make that remotely possible.

Like is there AI Safety needs beyond maybe not letting AI control nuclear plants or even airplanes?

Beyond thought experiments what are the AI Safety people imagining could go wrong? And I am not asking the rhetorical "what could possibly go wrong?", I am really asking what exactly could go wrong.

And I also get being hesitant until we are really sure what could go wrong (and I am wondering if that's really where all the AI Safety people are).





**********
"Everyone has a plan until you punch them in the face. Then they don't have a plan anymore." (c) Mike Tyson

"what's a leader if he isn't reluctant"
13495136, what's also important to remember is that the paperclip maximizer guy
Posted by Rjcc, Tue Nov-28-23 12:30 PM
founded effective altruism

which is the dumbest religion anyone has ever come up with

if you ask someone in it if it makes sense, they'll scream at you that you just want to kill poor people and you don't want them to have mosquito nets to prevent malaria


but once you look it up you'll see what it is -- prosperity preaching

they tell themselves it's ok that they make a lot of money because they're obviously also the smartest philanthropists who've ever lived and they'll donate it in a better way than someone else would have.

they don't verify any of this information.


www.engadgethd.com - the other stuff i'm looking at
13495129, there are a lot of ways to answer this
Posted by Rjcc, Tue Nov-28-23 10:55 AM
to me there are two things that are important to consider

one: uhhhh no one really knows?

two: AI apocalypse warning is also AI marketing

the most likely possibility is that absolutely none of that is even remotely possible, and that some math nerds have built a text / image generator that you can use for crappy marketing


this isn't me being pissy about AI, just like, look up literally anything Geoffrey Hinton has said for like the last 15 years.

dude's a genius at building AI tools, but he's a fucking idiot about everything else and every prediction he's made about anything other than what he's an expert at has been 100% wrong, and the predictions for the stuff he's an expert at are like 50/50

www.engadgethd.com - the other stuff i'm looking at
13495138, a different take on it
Posted by Rjcc, Tue Nov-28-23 12:56 PM
I'm not super into doctorow usually but someone has to say this stuff

https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space

"On a graph that plots the various positions on AI, the two groups of weirdos who disagree about how to create the inevitable superintelligence are effectively standing on the same spot, and the people who worry about the actual way that AI harms actual people right now are about a million miles away from that spot."


"The question of machine intelligence isn't intrinsically unserious. As a materialist, I believe that whatever makes me "me" is the result of the physics and chemistry of processes inside and around my body. My disbelief in the existence of a soul means that I'm prepared to think that it might be possible for something made by humans to replicate something like whatever process makes me "me."

Ironically, the AI doomers and accelerationists claim that they, too, are materialists – and that's why they're so consumed with the idea of machine superintelligence. But it's precisely because I'm a materialist that I understand these hypotheticals about self-aware software are less important and less urgent than the material lives of people today.

It's because I'm a materialist that my primary concerns about AI are things like the climate impact of AI data-centers and the human impact of biased, opaque, incompetent and unfit algorithmic systems – not science fiction-inspired, self-induced panics over the human race being enslaved by our robot overlords."

www.engadgethd.com - the other stuff i'm looking at
13495143, OK yeah. this is a good read.
Posted by Buddy_Gilapagos, Tue Nov-28-23 02:27 PM
Lines like this:

"billionaires throwing money at the hope of bringing about the speculative fiction stories they grew up reading – and philosophers and others feeling important by dressing these same silly ideas up in fancy words":

But I guess it never occurred to me that even the AI Safety people are in on the AI marketing scheme and not engaged in the present-day real AI concerns like "ghost labor, algorithmic bias, and erosion of the rights of artists and others."

I can file these folks in the same category as the Elon Musk types who are more eager to spend billions on an escape plan to Mars in case of an asteroid than to feed and house the homeless.

Sure there is a nonzero chance of an asteroid or AI wiping us out, but it's also just a way to not focus on the real issues facing us today.


**********
"Everyone has a plan until you punch them in the face. Then they don't have a plan anymore." (c) Mike Tyson

"what's a leader if he isn't reluctant"
13495164, nice pull quotes. last sentence = BARS.
Posted by Triptych, Wed Nov-29-23 08:42 AM
Personally I think AI can either save or destroy us - kinda like any new transformational power. Working to make sure it's the former.

13495177, This is long but good -- vitalik essay posted two days ago
Posted by fif, Wed Nov-29-23 02:08 PM
https://vitalik.eth.limo/general/2023/11/27/techno_optimism.html#superintgood

Probably not best source for the nitty gritty, 'how and what could happen' of your question but Vitalik is a guy worth reading. He's at 10% p(doom) --ie shit goes catastrophic, humanity big-time fucked up, maybe all dead.

Can run it through Claude 2 for summary.

I wrote some shit out related...1100 words lol. Some stream of consciousness reflections on some of my thinking about the doomers I've met. Largely misguided, deluded. But can't dismiss them with a hand wave is my short view. Fascinating to think about it all. Talking to people steeped in it is...interesting. Odd lot. But smart, interesting. I disagree with much of it, but chopping it up with the doomsayers is good for tightening up my thinking, I find.
13495178, Melanie Mitchell - Machine Learning Street Talk
Posted by fif, Wed Nov-29-23 02:20 PM
This is a good overview on some of the wrong thinking doomers are caught up in.

https://open.spotify.com/episode/3xHGYLOwGZzm7RCRhE27L4?si=8EpyrEQbS-qkNN6CdJ3Shw

Comp sci people. A bit out of their depth on cog sci, psychology, neuro, etc. Their day to day is screens. And computation. They make bits scale exponentially with almost no effort all the time. "Intelligence", human minds...don't work quite the same. Lot to explain. I've been trying to formulate my positions better but tend to run on and on.
But I line up with a lot said in this podcast...better to let them say it
13495192, like I said many of them are geniuses in their field
Posted by Rjcc, Wed Nov-29-23 04:14 PM
but they're extrapolating to guess at human behavior in ways that they don't have even a limited understanding of, and haven't bothered to study because they don't really have interest in it.


www.engadgethd.com - the other stuff i'm looking at
13495193, My CTO tried to tell me that AI is going to take over content creation
Posted by Buddy_Gilapagos, Wed Nov-29-23 04:58 PM
Not that artists will use AI as tools, but rather that we will listen to songs generated by AI and watch movies written and created by AI. It was just the most absurd thing I've heard.

I also think no one wants to be the schmuck who says something is impossible that ends up actually happening.


**********
"Everyone has a plan until you punch them in the face. Then they don't have a plan anymore." (c) Mike Tyson

"what's a leader if he isn't reluctant"
13495206, This has been one of the most surprising/telling things
Posted by fif, Wed Nov-29-23 10:15 PM
I've found talking to some in this crowd. Asked someone months ago if/when an LLM would write prose/novels at the level of Proust, Austen, etc.: psychological realism. This wicked sharp brainiac ML engineer who'd impressed me chatting this and that on engineering physics computerese...thought they were already basically there. This was when gpt3.5 was top of the pops. Took a poll around asking people their favorite fiction. Terry Pratchett, Douglas Adams etc. which w/e... is what it is...but almost no one named something that lets you leap into another soul through the page. A great novel...always felt to me this magic thing...as close as I can come to knowing what it is to be another. The duration, the interiority.

Have found many of em have no access or interest in that. Mindblindnesses... lacking the muscles for empathic leaping is something I've found common. Hypothetical worlds with cardboard characters to carry the joke or idea is their zone. "Intelligence", brains more computerlike than the others: their strength. Understanding minds of others (and their own): their weakness.

Not sure that makes sense. Really interesting to interact with them. Anyone interested should seek them out. If u like to argue, it's fun.

Important to say not all the same. Generalize across a large umbrella and you're gonna be wrong. But I've seen a pattern there in more than a handful. I find them fascinating and have learned a lot talking with them. Anything is up for discussion...but they might go on to slice and squeeze the bigness out of it to fit it to some beloved way of logicking. Not good at saying I don't know, we can't know. Always wanna roll Bayes out and predict predict predict. Can't see the cultishness cuz large part of their steez is being the most cult immune.

But yea the idea that the machines can't see the world at all, have no desires..but will soon generate great art based in it...nah. BUT most movies that do numbers are formulated schlock. And/or set in a virtual world already. Space, fantasy land...imo success of Marvel movies: a leading sign of USA cultural decline. Avoidance. Aversion to the mirror. Can't render this world in ways to capture people's attention. Also going to some neutral ground...sells better in China. Sequels sell...LLMs can do lowgrade sequels to lowgrade human scripts, so maybe, sure why not?

Just spouting. Some time off, been writing some, could go on for days here. Trying to make some sense of it all. LLMs are the most amazing thing I've seen, very energizing.
13495243, yes a lot of popular things are formulaic
Posted by Rjcc, Thu Nov-30-23 12:21 PM
that does not mean that they can be generated from a formula

or that even if you had the magic code to make a hit song (someone claims they do every other year, or claims to have the magic code to music promotion, etc.), it would actually work.

usually the people pushing these tools don't understand that for every popular thing you're talking about, there's a person at the end of it making the calls and that's usually what drives it

www.engadgethd.com - the other stuff i'm looking at
13495345, Don't mean it can be, don't mean it can't be
Posted by fif, Fri Dec-01-23 08:30 PM
We don't really know, do we? People responded well to Star Wars and Auto-Tune. I'm agnostic about it all. GPT-4 (and really even 3) dropped my jaw. What it can do with language...not something I thought was near.

All this "intelligence" talk. It's odd. Nuclear science...physical experiments started to show...oh shit, we can make big big energy from this. So the bomb could be foreseen. Measurable, predictable phenomena foretold it.

Electricity...makes boops and bips, Morse code, light bulbs, radio, TV...and so on. Information-carrying stuff that can look and sound like the world. TV is impossible to the person in 1850. Magic. Don't have to know all the ins and outs of every cell in the eye/brain to make images that simulate seeing pretty well.

Can we simulate intelligence? Don't know. I tend to think the way people talk about ASI is confused. Is intelligence something that just scales forever, conferring greater and greater powers with every increase? To do what? Map all the atoms? Predict their every movement? Laplace's demon. Hmm, with what sense organs? And why? Doesn't make sense to me that doomers get super specific about ASI tricking gene labs into brewing them up some human-extinction drug. Tricking people like a wizard and shit. Wtf are y'all talking about? But but...it'll be so smart, man, you can't even imagine! Ok...so cuz a computer can do great with words...Jedi mind trick magic machines are around the corner?

We don't know ourselves. The brain is a black box. Mechanistic interpretability of humans? Neuroscience. Don't really know much at all about how our body computers do what they do. fMRI gives very crude, low-res maps. Can't predict a person's experience an hour from now. But we can predict that a thing that doesn't exist might try to manipulate scientists into building it an army of nanobots to take over the world? Bizarre. Ignores physical reality. These people are disembodied, detached. But mp3s, a TV show...these are virtual already. Easier for machines to mimic, I would think.

--

Altman interview on The Verge. Did he do that just to fake-bashfully shake his head at the leak of Q* to keep the investor hype up? Otherwise not a lot there. No press on ai alignment board members worried about apocalypse?

Funny to see that interview and then a recap directly under it written by guess who...
given your thoughts on the guy. Only giving u a hard time... But c'mon, if only dumb mfers listen to that dumb mfer Sam...seems you now got Satya Nadella on the dumb mfer list.
13495678, satya nadella is his boss
Posted by Rjcc, Sat Dec-09-23 02:35 AM
he doesn't listen to the guy.

he has a nerd to build him smart clippy, he don't care about the other shit

"No press on ai alignment board members worried about apocalypse?"

the what?

sam knows who I am, where I work, and what I think of him




www.engadgethd.com - the other stuff i'm looking at
13495687, Recap God 💪🏾
Posted by Triptych, Sat Dec-09-23 07:30 PM
13495689, Back to being bizarre
Posted by fif, Sat Dec-09-23 09:49 PM
Every report said Satya was blindsided and pissed by the ouster and did everything he could to get Altman back. Altman is the CEO. Satya wants Altman at the table and was ready to move mountains to make it happen. Yea rjcc, he doesn't listen to him. What goes on in your head? Your takes are very strange. Alternate universe guy. Thought for a minute you'd leveled out and returned to the same world as the rest of us.

Your thoughts move the stock market, Altman knows you got him clocked. There you are in the center of the universe
13495700, LOL.
Posted by Rjcc, Mon Dec-11-23 01:15 AM
do you take orders from the people who work for you?

do you understand how "reports to" works?

it means altman listens to nadella.

nadella doesn't care that altman is an amoral shit, he just wants dude to build his office 365 chatbot. it's not hard to figure unless you're the dumbest motherfucker in the world.

www.engadgethd.com - the other stuff i'm looking at
13495707, Thx, great observations
Posted by fif, Mon Dec-11-23 10:48 AM
You explained it all so well.
13495714, no prob!
Posted by Rjcc, Mon Dec-11-23 01:04 PM

www.engadgethd.com - the other stuff i'm looking at
13495144, https://global.discourse-cdn.com/openai1/original/4X/9/8/d/98dd0579c64e999f145c7e1ac652b4187c8a8512.jpeg
Posted by Triptych, Tue Nov-28-23 02:49 PM
https://global.discourse-cdn.com/openai1/original/4X/9/8/d/98dd0579c64e999f145c7e1ac652b4187c8a8512.jpeg
13495582, Google releases Gemini…
Posted by soulfunk, Wed Dec-06-23 05:55 PM
https://deepmind.google/technologies/gemini/

https://www.reuters.com/technology/alphabet-unveils-long-awaited-gemini-ai-model-2023-12-06/
it hallucinates REALLY bad
Posted by Rjcc, Thu Dec-07-23 12:22 PM

www.engadgethd.com - the other stuff i'm looking at
13495682, The smoke and mirrors they used to demo it
Posted by soulfunk, Sat Dec-09-23 08:10 AM
are hilarious…not sure why they thought that would be a good look.
13495607, openai board member on why they fired altman: "man, idk"
Posted by Rjcc, Thu Dec-07-23 12:33 PM
https://www.wsj.com/tech/ai/helen-toner-openai-board-2e4031ef

Toner maintains that safety wasn’t the reason the board wanted to fire Altman. Rather, it was a lack of trust. On that basis, she said, dismissing him was consistent with the OpenAI board’s duty to ensure AI systems are built responsibly.

“Our goal in firing Sam was to strengthen OpenAI and make it more able to achieve its mission,” she said in an interview with The Wall Street Journal.

Toner held on to that belief when, amid a revolt by employees over Altman’s firing, a lawyer for OpenAI said she could be in violation of her fiduciary duties if the board’s decision to fire him led the company to fall apart, Toner said.

www.engadgethd.com - the other stuff i'm looking at