Midjourney and AI Images

Started by farrokh747, Mon, 15 May 2023 07:45

farrokh747

perhaps you've been following what's happening with AI image generators... this is clearly going to upend the business... Glad this is happening towards the twilight of my commercial working life... :)

https://www.blind-magazine.com/stories/how-ai-imagery-is-shaking-photojournalism/

https://www.midlibrary.io/categories/photographers

https://twitter.com/Knightama_/status/1657011656369623042

https://twitter.com/iamneubert

many, many more online.... MJ is on v5, what will v6 and onwards be like...?

Ditto AI for music, video, etc....

FC


Hardy Heinlin

These AI pictures don't touch me ...

Nice for technical stuff, but not for emotional stuff, at least not for my antennas.


|-|

United744

AI is so overrated.  ::)

AI = Artificial Idiot.

No doubt a couple of those images are very convincing, but more than a cursory glance reveals problems.

Eyes. You can't fake eyes. Everyone looks dead.

Jeroen Hoppenbrouwers

I believe AI will be immensely (mens = human in Dutch) better than any human at combining and replicating existing information, which includes digital imagery and audio. If you define intelligence as such, artificial intelligence has been born. Statistically, what comes out is the closest thing to what went in without being the same. So doing again what has been done before, even if the AI has never done it before, is going to be really simple.

Like landing an airplane.

Until the situation is just not exactly the same as before, and then it rolls the dice and tries to land just not exactly the same.

If AI becomes sufficiently human, it will also start to err as a human.

Who monitors the AI?

Hoppie


Edit:
A long time ago, a mathematician was shown a bunch of aircraft that were shot up but made it home. People were busy strengthening the armor and construction of the spots that had been shot up the worst, to be better prepared for future attacks. The guy scratched his head and said that instead, the airplanes that did NOT make it home showed where to put extra armor, and that the airplanes they were looking at showed where they did NOT need to reinforce (simplified).
https://www.wearethemighty.com/popular/abraham-wald-survivor-bias-ww2/

Translated to autoland ... so we should not only teach the AI how to land a plane, but certainly also how NOT to land a plane. And this difference between good and bad needs to be made in the training material, just as with the spam/ham Bayesian filters for email. This is probably going to be a real issue, as there are plenty of subjects where good/bad isn't easy to decide, and is often even a discussion point or a politically determined trench where people have dug in.
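To make the spam/ham comparison concrete, here is a minimal sketch of such a Bayesian filter (a toy naive Bayes classifier in Python; the corpus, words, and function names are made up for illustration). The point is that the good/bad labels must already exist in the training material before the statistics can do anything:

```python
from collections import Counter
import math

# Tiny labeled corpus. The "ham"/"spam" (good/bad) judgement has to come
# from a human first -- exactly the training-material problem described above.
ham  = ["meeting at noon tomorrow", "please review the landing checklist"]
spam = ["win free money now", "claim your free prize now"]

def counts(docs):
    c = Counter()
    for doc in docs:
        c.update(doc.lower().split())
    return c

ham_c, spam_c = counts(ham), counts(spam)
vocab = set(ham_c) | set(spam_c)

def log_score(msg, c, prior):
    # Naive Bayes: log P(class) + sum of log P(word | class),
    # with add-one smoothing so unseen words don't zero everything out.
    total = sum(c.values())
    s = math.log(prior)
    for w in msg.lower().split():
        s += math.log((c[w] + 1) / (total + len(vocab)))
    return s

def classify(msg):
    return "spam" if log_score(msg, spam_c, 0.5) > log_score(msg, ham_c, 0.5) else "ham"

print(classify("free money now"))        # -> spam
print(classify("review the checklist"))  # -> ham
```

With only four training sentences the filter already sorts these two messages correctly; whether it keeps doing so on anything new depends entirely on how well the labeled examples cover the good and the bad.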

farrokh747

some more..

https://www.theguardian.com/technology/2023/apr/17/photographer-admits-prize-winning-image-was-ai-generated

reams and reams of discussion online... on photography and the larger impact of this new AI... Photoshop etc. have been doing this in a small way (compared to this new AI) for some years now... (Auto-Tune for voice)

perhaps the impact will be similar to that on art/painting when photography was invented... 

fc

Hardy Heinlin

Quote from: Jeroen Hoppenbrouwers on Mon, 22 May 2023 10:20
If AI becomes sufficiently human, it will also start to err as a human.

It errs already. Maybe not erring like a human (what is human anyway?). But it errs. When creating academic texts using a limited number of known facts, it adds invented, random data. When creating journalistic photos, it generates wrong fingers (six or four fingers, "sausage fingers" etc.) and other bizarre effects. Sure, it will get better. But the errors are already there, and I think the number of errors will not decrease, as future, more complex AI structures will introduce new errors. Even if it didn't make any errors anymore: isn't erring a basic principle of learning?


|-|

Will

So-called AI is just a faster and more comprehensive search engine. If I said "write a poem about a leaf," it could see how the words "write," "poem," and "leaf" have been connected to each other and to a million adjacent topics, and then convincingly generate output from whatever is online and searchable that is linked probabilistically to what other people do when they "write" "poems" about "leaves."

But that's just dictionary work. What makes it seem like intelligence is that it operates at a time scale no human could match. And of course, since AI has the whole internet to look at and link up in associations, AI seems incredibly smart. But again, it's just dictionary work. Kind of like how the editors of the Encyclopedia Britannica compiled the world's knowledge about leaves to write their entry on "leaf," but a trillion times faster.
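A toy sketch of that "dictionary work", assuming a trivial bigram model built from a few made-up sentences (real systems use vastly larger statistics and context windows, but the principle is the same: count which words follow which, then sample):

```python
import random
from collections import defaultdict

# A few made-up sentences standing in for "the whole internet".
corpus = (
    "write a poem about a leaf . "
    "a leaf falls from the tree . "
    "the poet will write about the tree . "
    "a poem about the falling leaf ."
)

# "Dictionary work": record which word has followed which.
follows = defaultdict(list)
words = corpus.split()
for w1, w2 in zip(words, words[1:]):
    follows[w1].append(w2)

def babble(start, n=8):
    # Generate by repeatedly sampling a word that has followed the
    # current word somewhere in the corpus -- association, not understanding.
    out = [start]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))
    return " ".join(out)

random.seed(1)
print(babble("write"))  # e.g. "write about the tree . a poem about the"
```

Nothing in there "knows" what a leaf is; it only knows which words have kept company with which other words.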

Jorge Francisco Isidoro Luis Borges Acevedo would have loved this. Or been terrified by it. Or both.

In addition to Borges, I'm reminded of the library in Eco's The Name of the Rose, which contained all knowledge, both quotidian and transcendental, both funny and forbidden; compiled, illuminated, and indexed by an army of professional scribes who did nothing else except that (and raised vegetables, and murdered each other). Of course, the library burned down in the end.
Will /Chicago /USA

Hardy Heinlin

Isn't the human brain, too, just a search engine that looks up data in its memory? Like AI?

And when the human brain is creative, isn't it just a random combining machine that plays with fragments of its memory until it discovers a "beautiful" combination? Like AI? ("Beautiful" typically means it reminds us of something that already exists, or of something that is programmed to be "beautiful", like certain harmonies etc.)

Maybe it's just the consciousness that makes the difference between the biological creature and the digital creature.

Can the digital creature develop consciousness? Maybe consciousness has nothing to do with intelligence at all. Maybe it's a function of biological systems, not of intelligence.

So, in the end, maybe, the biological intelligence and the digital intelligence are very similar -- the only difference is: the digital system is unable to enjoy its existence.


|-|

Will

Yes, I agree with that; the human brain is also an incredibly skilled association machine. What makes us different, I think, is emotions-as-motivators: longing, envy, love, wistfulness, melancholy, anxiety, and doubt. Your creativity or your behavior can be influenced by these feelings, which don't seem like mere associations. They are motivators, too. And of course our biology: hunger, sex, fear and adrenaline, heat and cold, and pain. Those things seem to be experienced in a way that is more than just associations.

We could tell an AI to act as if it's in pain, or sad, but I think (I think?) the AI would just be "doing as if" and not feeling motivated by emotion.

It's amazing how much science fiction has already thought about this. The android "Data" in Star Trek couldn't feel, and the Voight-Kampff machine from Blade Runner (the device that could, in the hands of a skilled operator, tell the difference between humans and replicants) was all about detecting physiological responses to the experience of empathy. Sort of like the next version of the Turing Test.

And how Skynet decided to destroy humanity within a millisecond of becoming self-aware in Terminator.
Will /Chicago /USA

martin

Quote from: Hardy Heinlin
Maybe it's just the consciousness that makes the difference between the biological creature and the digital creature.
But now you have to define "consciousness"... Good luck!  :D

Quote from: Hardy Heinlin
So, in the end, maybe, the biological intelligence and the digital intelligence are very similar
The huge and essential difference is that biological intelligence (however you define "intelligence") is shaped and modulated, phylogenetically (during the evolution of the species) and ontogenetically (during the development of the individual), by constant interaction between the organism and its environment.

Artificial "Intelligence" isn't. The "AI" (in its current form) may have ingested all texts from the Internet, but all it "knows" is how to connect any part of that to another part of that, not in any way to the "real"*) world surrounding it. Any kind of "meaning" thus does not come into it, from the "AI"'s point of view; hence "knows" in quotation marks.

At least not until it is given some kind of sensors and the capability to "evaluate" their input; but that's not the case for most "AI" currently under discussion (and separate from robotics), as far as I know.
 
[What I (so far) really fear, contrary to most comments, is the discovery that "our" (human) intelligence is in large part actually like "AI", namely "just" a talent to combine words in a certain way without any connection to the real world and thus without any actual "meaning"...  ;D ]

Cheers,
ChatBo Mairtn

*) another can of worms, I know

Hardy Heinlin

I wrote: "Maybe consciousness has nothing to do with intelligence at all."

To be precise, I do think that consciousness and intelligence have a causal link.

But I don't think consciousness is a product of intelligence. I assume intelligence is a product of consciousness.

In the course of evolution, consciousness is a rather old phenomenon. Intelligence, on the other hand, is probably a younger thing. So, provided evolution cannot drive backward, AI will never develop any consciousness.

If we look at the brains of insects, there are mainly just basic functions which control the body motion and things like that, and which produce mental qualities like green, sweet, hot, and so on. There's not much space to store memories, thus there's not much experience that would enable the insect to make plans.

Evolution then added more features to that early brain model. The newer models were able to produce a greater number of mental qualities; not just blue, salty, cold etc., but also love, anger, sadness, euphoria etc.

All these qualities occur at certain events, at variable intensities. The intensity determines whether or not an event will be stored in the brain's long-term memory. The memory only contains things that are important to remember during the lifetime, or that are important to recall when making plans. The memory doesn't contain any boring stuff. It just contains events that were accompanied by intense feelings: very sad events, very happy, very disgusting, very beautiful events. At that stage in evolution, with all these emotions, I think consciousness already existed. Maybe it existed even earlier.

Further along in evolution, the greater amount of emotional memories needed a greater memory management. And that's what I call "intelligence". It has a lot of bureaus located everywhere in that big hall that was built around that old little insect brain which still sits in the middle of our head.

How do I define "consciousness" anyway?

Of course, that's a philosophical question. First, I would say consciousness has no specific location within the brain, nor is it a product of any intelligence (as I wrote above), nor is it a subject which is just there to experience qualities like red, salty, angry, happy.

Instead, consciousness is already implied in these qualities themselves!

When there is blue, that Blue itself already contains a Self, i.e. a consciousness. When there's fear, that Fear itself already contains a Self. These qualities are not really "experienced" qualities. There's no subject-object link. The forest and the trees are not two different things. The forest is in the trees. Likewise, the Self and the Blue are one unified mental atom. Indivisible.

Just a theory :-)


Quote from: martin on Tue, 23 May 2023 18:53
At least not until it is given some kind of sensors and the capability to "evaluate" their input; but that's not the case for most "AI" currently under discussion (and separate from robotics), as far as I know.

Well, maybe not "sensors" in the literal sense. But if you feed it with pictures, texts, or audio, it doesn't make a difference whether that feed comes from a disk or from a sensor, does it? When we look at a landscape displayed on a monitor, do we see a picture of a landscape or a picture of a monitor?

Or a picture of our retina? :-)

Whatever it is, it's empirical information.

No?


Regards,

|-|ardy