
Nano Quadrotors

Started by farrokh747, Sun, 5 Feb 2012 07:14


frumpy

#1
:shock:
Try programming that!  :mrgreen:

Shiv Mathur

Haha ... but knowing Farrokh, he'd actually do it!

Phil Bunch

Utterly amazing.  Makes me wonder what the military are doing with these things!

How was the control of the swarm performed?  Was it done via something like Wi-Fi from a central computer or are the gadgets flying in a mutually supportive swarm independently and autonomously, under their own control?

Can the Skynet of Terminator movie fame be far away?

The Singularity is getting closer...

http://singularityu.org/about/overview/
Best wishes,

Phil Bunch

farrokh747


Shiv Mathur

Absolutely superb!

Phil Bunch

#6
I am just blown away by the fact that these things are autonomous and work interactively to accomplish things as complex as playing in a musical group while flying around.  This is quite different from having a large central computer sending control signals to each individual, although that would be difficult enough.

Somehow the AI (Artificial Intelligence) computing community has suddenly made a really large step in what they can do with their technology.

-----------------------------------
Below are some links on these things that I found while Googling:


Here's a list of publications from a Swiss research institute (ETH Zurich):

http://www.idsc.ethz.ch/Research_DAndrea/FMA/publications

---------------------------
http://www.youtube.com/watch?v=1nYYl5Max2Q&feature=iv&src_vid=eWmVrfjDCyw&annotation_id=annotation_726383

https://www.youtube.com/watch?v=eWmVrfjDCyw Berkeley joins the AI quadrotor fun

http://www.eecs.berkeley.edu/~aaswani/LBMPC

https://www.youtube.com/watch?feature=iv&src_vid=eWmVrfjDCyw&annotation_id=annotation_748922&v=dL_ZFSvLXlU More Berkeley demos

https://www.youtube.com/watch?v=geqip_0Vjec&feature=related

https://www.youtube.com/watch?v=S-dkonAXOlQ&feature=related

https://www.youtube.com/watch?feature=iv&src_vid=eWmVrfjDCyw&annotation_id=annotation_748922&v=dL_ZFSvLXlU -- quadrotor gadget teaches itself to catch an arbitrarily thrown ball! In one of these videos it teaches itself to land smoothly as a helicopter. How are they doing this with a tiny CPU???

http://singularityu.org/about/board-of-trustees/dr-ray-kurzweil/ - Ray Kurzweil's "The Singularity is Near" university. He proposes, and I have to agree, that computers will soon and very abruptly take over from people. The machines will need people's atoms to some extent but not our "bits".
Best wishes,

Phil Bunch

Jeroen Hoppenbrouwers

#7
For a long time I have been too close to the AI community for comfort and I don't agree with people that claim that AI will take over. Because AI is doomed to fail. We don't understand intelligence nearly well enough to have a fighting chance at programming it, and we don't have the resources to look up the knowledge instead (i.e., "assume we have an ontology" as the first line of the paper).

However, by making a tiny adjustment in the target, we can and will get somewhere.

Do not try to imitate a mind. Instead, try to imitate a brain.

A brain is stupid in the detail but staggeringly capable as a whole. A brain is a machine. A mind is the result of a machine plus nearly its complete history. We can make a brain, but not a mind.

Brains are not intelligent per se. They can accomplish very complex tasks without understanding what they do. And they most certainly are not self-aware. This is the key.

Dedicated brains for solving complex but isolated problems will exist, and do exist. These artificial brains will take over some parts of the world, and I hope nobody is stupid enough to let them take over one part too many. But they won't rule the world unless there is legislation that allows them to rule. Like, forbidding pilots to fly their machine themselves and always leave it to the brain. There, hostile takeover by machines --- NOT. Just legislation. Hostile takeover by, well, I need to remain polite.


Jeroen

Phil Bunch

#8
Quote from: Jeroen Hoppenbrouwers

Brains are not intelligent per se. They can accomplish very complex tasks without understanding what they do. And they most certainly are not self-aware. This is the key.

Jeroen

This part of your note reminded me of my ex-wife!  (insert many grins here)

Thanks for the comments and the benefit of your experience.  Your summary put a lot of pieces of this recurring puzzle in context and helped me clarify my own limited thinking about these things.

Here's a link to a TED talk by the U Pennsylvania group, and it explains more about how they work and how they've programmed them to be somewhat autonomous.  I thought the math behind their programming was interesting from an aerodynamics and mechanics perspective.

http://www.ted.com/talks/vijay_kumar_robots_that_fly_and_cooperate.html

(click the icon in the upper right corner to expand it to full screen)

Also, see this $300 version:

http://www.brookstone.com/parrot-ar-drone-2-quadricopter

"No ordinary RC helicopter, the Parrot AR.Drone uses military-grade technology for super-stable flight.

This four-rotor quadricopter employs dual ultrasonic altimeters, a three-axis accelerometer, multiple gyroscopes, and an embedded Linux platform to continuously stabilize itself during flight. AR.Drone can even compensate for turbulence caused by wind. In fact, it's so easy to control, anyone can fly it."
Best wishes,

Phil Bunch

martin

Quote from: Jeroen Hoppenbrouwers

A brain is a machine.
No.
(Just to make you explain what a machine is...)  :D

Hardy Heinlin

#10
My definition of machine:

A machine is man-made, a product of culture. It does something, something that a human is not able, or unwilling, to do.


My definition of brain:

A brain is a product of nature. It does something. It does many things.


My definition of culture:

Culture is what humans create. Indirectly, culture is created by nature. But not all nature is necessarily cultural since not all nature is solely human.


My definition of nature:

Nature is everything, including culture because humans are natural and the product of a natural product is natural, too.


My definition of apple:

An apple is an apple and a fruit.


My definition of fruit:

A fruit can be an apple. But it can also be a non-apple. Or an ex-apple.


|-|

Phil Bunch

There is another way of deciding when a machine is "intelligent", at least in some sense of the word - the Turing test, wherein one asks people to pose questions to the machine and then see if they think the machine is human or not, based on the qualities of the answers.

I used to believe in this test, and perhaps it would work to a reasonable extent if it were devised properly. Then, in about 1980, I was working one summer at a major research institution (on assignment by my employer), and a programmer showed me how easy it is to deceive people even with a modest programming effort. This was back in the dark ages of computers, and we only had access to a Data General Nova minicomputer.

The programmer devised a fairly short, simple program that would act as a psychiatrist, probably modeled after the Eliza effort along these lines from, I think, a major US university. He simply programmed the computer to figure out what the subject of the person's query was, checked it against a list of hot topics in psychiatry, and then constructed a stereotypical answer (usually in the form of a noncommittal question).

The people we tested his program on were a mixture of (women) secretaries, medical school residents, and other scientists and engineers. The subjects were told that the computer terminal was hooked up to the institution's psych department and that we were trying to develop remote psych counseling services and needed to test it out on volunteers first. This sort of thing was done all the time, so nobody was suspicious. The program recorded both questions and answers.

What happened next was somewhat amazing to me - everyone quickly locked into believing they were interacting in confidence with a real psychologist/psychiatrist (aka "shrink" in the US). The way the program was set up, it quickly provoked people into talking about very personal aspects of their parents, their siblings, and their spouses/girlfriends or boyfriends. If someone got suspicious and typed in questions which it wasn't programmed to handle, it simply asked them "Why do you feel that way?" or "Why do you say that?", just like real shrinks. Only 1 or 2 out of a few dozen people became skeptical that they were talking to a real shrink.
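The keyword-and-canned-answer scheme described above is easy to reconstruct. Here is a minimal sketch in Python; the topic list and replies are invented for illustration (the original ran on a Data General Nova, not Python):

```python
import random

# Hypothetical "hot topics in psychiatry" and canned replies.
HOT_TOPICS = {
    "mother": "Tell me more about your family.",
    "father": "How do you get along with your father?",
    "dream": "What do you think that dream means?",
    "work": "Does your work make you anxious?",
}

# Noncommittal fallbacks, used when no hot topic matches --
# the "Why do you feel that way?" trick from the story above.
FALLBACKS = [
    "Why do you feel that way?",
    "Why do you say that?",
    "Please go on.",
]

def respond(query: str) -> str:
    """Find the subject of the query and return a stereotypical answer."""
    words = query.lower().replace("?", "").replace(".", "").split()
    for topic, answer in HOT_TOPICS.items():
        if topic in words:
            return answer
    return random.choice(FALLBACKS)
```

A few dozen lines like these are enough to sustain a session, so long as the subject supplies the meaning themselves; ELIZA itself was not much more elaborate.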

I even found the program was helpful to talk to on a daily basis, even knowing it was only a program and wasn't really intelligent.  It sort of cleared up my thinking and did the other things that a routine session with a good counselor/shrink will do to help one lead a better life.  

The problem that quickly developed was that people had interacted with the program (and thus the programmer) as if the relationship was confidential.  In reality, we were looking at each person's answers and often knew the individual.  Thus, we soon shut the thing down and erased everyone's responses, realizing that we had accidentally violated ethical research protocols in our attempt to mostly humorously play with these concepts and the (we believed) inaccurate idea that computers are or can be intelligent.

The autonomous quadrotor gadgets are really fun to watch, but as the TED talk video shows in some detail, it's only a mathematical program, skillfully tuned to do a few things.  Some of its behaviors look intelligent but that's only because of our everyday experiences as to when intelligence is required for certain behaviors to happen.  Mathematically programmed, emulated intelligent behavior is surely different from real intelligence...or is it?!?!?

If almost no one can tell when they're not talking to a real, intelligent person after a reasonable effort, how sure are we that computers can't be intelligent?  Now that we have good voice recognition (as with Apple's Siri app for the iPhone and Nuance's "Dragon Go" iPhone app), there are fewer and fewer barriers between human intelligence and **perceived** machine intelligence.

The movie "2001" by Stanley Kubrick was interesting in this regard. Here, "HAL" was more human-like in his actions and behavior than the stereotypical soldier/robot-like astronauts in the movie. Which entity was really intelligent and "human"? What do we really mean when we think of another entity as a sentient being, or as living? Maybe what we're really asking is how we decide that another entity is a sentient being or alive. Does it matter if the entity is made of electronics and has been programmed, perhaps mostly by another computer?

The human and intelligent categories may not be that important, especially if there are many other non-earth-based alien life forms out there.  

Just some thoughts and experiences.

-------------------------
IMO, Ray Kurzweil has been unfortunately influential in falsely advocating for general artificial intelligence and the Singularity movement.  He's even gone so far as to arrange for having his head (and body?) cryogenically frozen so it can be resurrected when the time comes and computers can assimilate his mind/brain/intelligence after the Singularity.  I can't accept his theories but do find various aspects of this general subject matter to be interesting.  It's hard for me to project what happens after yet another factor of 10 becomes available for CPU speed, RAM, and storage, much less a factor of 100 or 1000.  Somewhere along the way, human intelligence simply may not be very relevant.
Best wishes,

Phil Bunch

martin

Quote from: Hardy Heinlin

My definition of apple:

An apple is an apple and a fruit.
Now only the rose is missing...

frumpy

#13
Sound Blaster cards used to ship with an artificial intelligence program similar to the one you described in the '90s, called "Dr. Sbaitso". It even talked :D

I had some studies of artificial intelligence at university. Basically it's all about running down decision trees in different ways, only limited by memory and processing speed. Somewhat sobering, how stupid these programs are. Computers are still very inefficient at beating humans in games like Go, if they manage it at all.
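That "running down decision trees" can be made concrete with the classic minimax recursion. A toy sketch over a hand-built tree (nothing like a real Go engine, which would need pruning and evaluation heuristics on top):

```python
# Minimal minimax over a hand-built game tree, for illustration.
# A node is either a numeric leaf score or a list of child nodes.

def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):  # leaf: return its score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Tiny two-ply tree: the maximizer picks the branch whose
# worst-case (minimizer's) reply is best.
tree = [[3, 5], [2, 9], [0, 7]]
best = minimax(tree)  # max(min(3,5), min(2,9), min(0,7)) = 3
```

The sobering part is visible even here: the program "plays" by brute enumeration, with no idea attached to any move.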

The difference between humans and computers is the process of getting an idea. I think this is due to the structure of the brain: it works in parallel, while computers work in serial order. We can work with associations, while the computer can only follow a certain path.

Albert Hofmann put it this way: John 1:1 says (yeah, now we are getting really into it xD) "In the beginning was the Word, and the Word was with God, and the Word was God." The Greek word for "word" is "logos", which also means idea. So in the beginning there was the idea. Since humans are created in the likeness of God, humans can create too. As long as we don't create computers similar to humans, computers cannot create. Personally, I don't think we'll ever manage that, as we are unable to find out every detail of how the human brain functions.

Btw, those quadrocopters playing music: are they really autonomous? I could imagine it would be easier to control them from a central place, sending out codes to every copter. Everything else seems to be just a mess to program, as they have to be synchronized?

Will

#14
Quote from: Phil Bunch

I used to believe in this test, and perhaps it would work to a reasonable extent if it were devised properly.

But Phil, the subjects in your experiments weren't interacting with the computer in order to determine if it was a human being, they were interacting with it after being told that it was a psychiatrist.  That raises two issues.  

First, the subjects went into the situation with assumptions, not with skepticism.  Second, psychiatrists often talk in ways that make them sound like idiots and in ways that sound like parodies of, well, psychiatrists.  (Full disclosure: I'm a psychiatrist.)  So that suggests three approaches for more sensitive discrimination:

Approach A, plumbing the psychiatric knowledge base:  How will psychotherapy work in my case? Am I the kind of patient who does well in psychotherapy, and why or why not? What things about my case are harbingers of a complicated course?  What else do you need to know about me to make a recommendation for treatment?  Of note, all questions in Approach A can be parried with simple Rogerian tactics: Why are you asking that?  Why is this question important to you?  (See Approach C, below.)

Approach B, the Voight-Kampff test(*), asking questions designed to elicit an involuntary emotional response:  I read in USA Today that psychiatrists and psychologists are the same, is that right?  Sigmund Freud wrote papers about the benefits of using cocaine and he was right about that, wasn't he?  Aren't antidepressant pills just "crutches"?  Why do psychiatrists do barbaric things like involuntary hospitalization and shock therapy?  Of note, all questions in Approach B can be parried if the programmer has read or watched Blade Runner.

Approach C, the humanistic approach: simply call it like it is.  "Just saying 'What do you feel about that?' and 'Why do you say that?' isn't convincing, so do a better job of convincing me that you're a therapist, you talk for a while, I'll listen."  Of note, no question in Approach C can be parried if the subject is appropriately skeptical.

When I was 12, I wrote a computer program for my Atari that convinced people I had invented artificial intelligence.  It was extremely simple.  The rules were that (1) the computer would think of an object, and (2) the player would ask yes or no questions until they guessed correctly what the object was.  A typical session went like this:

PLAYER: Are you thinking of a place?
COMPUTER: Yes
P: Is it in Europe?
C: No
P: Is the place in Africa?
C: Yes
P: Is it in the northern desert regions?
C: No
P: Is it in the far south?
C: Yes
P: Is it Durban, South Africa?
C: Yes

The answers that the computer gave were entirely random. I still managed to fool most people into thinking that I had successfully programmed  artificial intelligence into an Atari 800 with 48k RAM.  The main point is that my program played upon people's assumptions and relied upon their suspension of skepticism and buying into the routine.  
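The entire "AI" in that story fits in a handful of lines. A sketch in Python (the original would presumably have been Atari BASIC):

```python
import random

def oracle(question: str) -> str:
    """Answer any yes/no question completely at random.
    The illusion of intelligence is supplied by the player,
    who steers toward whatever the answers seem to imply."""
    return random.choice(["Yes", "No"])

# A "session" is just the player narrowing down a random walk:
for q in ["Are you thinking of a place?", "Is it in Europe?"]:
    print(q, "->", oracle(q))
```

Whatever the player finally guesses, the last random "Yes" confirms it, and the player walks away convinced the machine had it in mind all along.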

Meta-point: "buying into the routine" and "relying on suspension of skepticism" happen all the time, in almost every human encounter, and in every relationship.  Making this explicit can be empowering.  Such is psychiatry...






* http://www.technovelgy.com/ct/content.asp?Bnum=126
Will /Chicago /USA

Phil Bunch

This web page appears to provide information about the Eliza program that inspired the experiment I described in my previous post:

http://jerz.setonhill.edu/if/canon/eliza.htm

200 lines of code...I think the program we developed on an ad hoc basis wasn't too much longer than this, but it was only to fool around with, not a serious research program or anything.

I wonder if IBM's recent successful demonstration of playing the TV game "Jeopardy" is any closer to something a serious researcher would describe as artificial general intelligence?  I would guess not; presumably it's mostly a combination of a natural language processor and a database of some sort.  Programming is simply not close to artificial intelligence with credible learning capabilities, IMO.  The quadrotor demos are essentially a mathematics solution plus some good engineering and fine tuning.  Getting multiple autonomous quadrotors to do stuff interactively is most enjoyable to watch, too, but is presumably merely some good programming plus some math.  I was surprised they could fit the code into low-level onboard computers, but that must simply show that this particular problem is not highly computationally intensive.  This field of work seems to depend quite a lot on the Microsoft Kinect motion and gesture sensing hardware, etc.  Kinect may be the biggest advance in low-cost robotics and sensing specialties.

I can't help but wonder what the military R&D labs are up to with this sort of stuff.  Of course, such systems have to be combat capable, and that is surely a much more demanding hurdle to pass than working for a few minutes in a controlled demo environment.

----------------
This forum thread has caused me to revisit the book and movie for "Blade Runner", originally written by Philip K. Dick, a highly regarded classic sci-fi writer.  The original book title was "Do Androids Dream of Electric Sheep?".  I hadn't realized that the book is so different from the movie.  The movie has been reissued in a 4-DVD edition with lots of extras, etc, and seems to have acquired cult status.

Also, the original book has been reissued as an audio book, using the movie title of Blade Runner.  The content is a faithful audio rendition of the Electric Sheep book, though, and is also available in downloadable form from audible.com.  Here are links, in case anyone is interested:

http://www.amazon.com/Blade-Runner-Movie-Tie-In-Edition-Androids/dp/0739342754/ref=sr_1_3?s=books&ie=UTF8&qid=1331344623&sr=1-3

http://www.amazon.com/Blade-Runner/dp/B0010BA814/ref=tmm_aud_title_0?ie=UTF8&qid=1331344623&sr=1-3
Best wishes,

Phil Bunch

Jeroen Hoppenbrouwers

Just to add more stupidity.

I tended to make fun of some fundamentalist right-wing market believers by taking their online statements, cutting them up into smaller chunks, and recombining them at random, much like the Chomskybot. The results are still staggering, especially if you feed the bot with new phrases. Twitter makes this sooooooo easy nowadays.

A few samples -- in Dutch originally, translated here (proper names kept as-is):

Henk Kamp can shout all he wants; the Party of the Traffic-Jam Driver is a great proponent of abolishing car tax and petrol duty, and of introducing high tolls on every motorway. Then the people who simply earn good money can drive on normally and don't have to queue up, and that is how it should be.

I was recently in superior Belgium, and a criminal Muslim who supports suicide bombings spends many millions on nice little left-wing things, which are largely paid for out of the motorist's pocket, regardless of whether a fundamentalist Islamist is involved.

This cabinet poodle should simply be deported: Aldo de Moor understands perfectly how the game works. Such a streetwise drug dealer also drives a big car, and rightly so, because he does not parasitize Dutch society the way an Islamic asylum seeker does, and this is a very sad day for freedom of speech.

The original Chomskybot did the same, but with more scientific language:

To provide a constituent structure for T(Z,K), a descriptively adequate grammar is not quite equivalent to the ultimate standard that determines the accuracy of any proposed grammar. Suppose, for instance, that the fundamental error of regarding functional notions as categorial cannot be arbitrary in problems of phonemic and morphological analysis. Analogously, this selectionally introduced contextual feature suffices to account for an important distinction in language use. Thus most of the methodological work in modern linguistics is unspecified with respect to the extended c-command discussed in connection with (34). Summarizing, then, we assume that the notion of level of grammaticalness is to be regarded as the system of base rules exclusive of the lexicon.

Clearly, an important property of these three types of EC is not subject to a parasitic gap construction. If the position of the trace in (99c) were only relatively inaccessible to movement, the appearance of parasitic gaps in domains relatively inaccessible to ordinary extraction appears to correlate rather closely with the system of base rules exclusive of the lexicon. I suggested that these results would follow from the assumption that this selectionally introduced contextual feature is to be regarded as problems of phonemic and morphological analysis. Let us continue to suppose that a case of semigrammaticalness of a different sort is necessary to impose an interpretation on nondistinctness in the sense of distinctive feature theory. However, this assumption is not correct, since the theory of syntactic features developed earlier does not readily tolerate the traditional practice of grammarians.

Stupido, ergo sum.


Jeroen

Jeroen Hoppenbrouwers

And if people wonder what I am up to these days, it is mostly getting our house in Miami in order and pushing satLINK firmware out the door to support TAMDAR and bring this weather-gathering technology to Europe.


Jeroen

Phil Bunch

I just saw one being demonstrated at our local Brookstone gadget store.  Here's a link:

http://www.brookstone.com/parrot-ar-drone-2-quadricopter?bkiid=cat_hero

It was fun to watch the salesman demonstrate the unit.  It seemed to be able to hover autonomously, and if the salesman moved his hand towards the unit from beneath, it moved upwards so as to maintain about a 6-12 inch (15-30 cm) distance between the unit and his hand.  This programmed "keep a safe distance" motion visually looked as if the gadget were "intelligent", which of course shows how easy it is to fool a casual observer for many tasks.  I didn't notice if anyone was helping by manually flying the thing while the salesman interacted with it.  It can be flown with an iPod or iPhone app, too.  I see that it comes with a 720p HD webcam.  I suspect that its battery life is only about 10-20 minutes per charge.
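One plausible way to get that hold-off behaviour is a simple proportional controller on the downward ultrasonic range reading. This is only a guess at the idea, with invented numbers, not Parrot's actual firmware:

```python
# Proportional "keep a safe distance" controller, illustrative only.
SETPOINT_CM = 25.0  # desired gap to whatever is underneath
GAIN = 0.8          # climb rate per cm of error (invented)
MAX_CLIMB = 50.0    # clamp on the command, cm/s

def climb_command(range_cm: float) -> float:
    """Return a vertical speed command from the ultrasonic range.
    Positive = climb away from an object that is too close."""
    error = SETPOINT_CM - range_cm  # positive when too close
    cmd = GAIN * error
    return max(-MAX_CLIMB, min(MAX_CLIMB, cmd))

# A hand rising to 15 cm below the drone commands a climb;
# at the 25 cm setpoint the command goes to zero.
print(climb_command(15.0), climb_command(25.0))
```

Nothing in the loop "knows" a hand is there; it only chases a number, which is exactly why the behaviour looks intelligent to a casual observer.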

I keep seeing news stories claiming that the military-industrial complex has developed insect-sized spybots that can fly around amongst us without being noticed, relaying live video back to their base.  I don't think I look forward to being spied on by authorities using such gadgets.  It's bad enough that they can now easily and without much if any supervision spy on our email and web activities.  I have the impression that the UK is leading the way towards a full-time surveillance state.  All this seems so much beyond the terrible life that was described in the book "1984".   I don't feel that I am getting much in return for losing essentially all my rights to privacy and so many of my personal freedoms.  Perhaps if I had lived in Manhattan on September 11th, I might think differently.  It would be at least somewhat different if I believed that the spymasters were professional and competent.  I can just see our local small town policemen using their spare time to snoop on my town's residents, mostly for the usual personal gratification reasons...

I understand that Europe has many privacy-related laws governing databases and other internet-related things.  AFAIK, we don't have much if anything like that in the US, and after George Bush, it's pretty much a free-fire zone, I assume.
Best wishes,

Phil Bunch