Showing posts with label artificial intelligence.

Friday, December 2, 2016

Google software improves its own translation process

[photo Manuel Burgos/Getty]

Click here to read a New Scientist article about an improved approach to automatic translation... invented spontaneously by the AI (artificial intelligence) system that had been handling this activity. That's a fascinating idea: an AI designs its own approach, enabling it to do a better job.

Monday, September 12, 2016

Google's latest voice is not bad at all

Click here to access a short French-language article about Google's latest achievements in synthetic voices. Samples start with well-chosen words: "aspects of the sublime".

Thursday, March 10, 2016

As they say in French, the carrots are cooked

That nice old-fashioned French saying designates a situation in which failure is just around the corner. And that's the current situation of the Korean Go player Lee Sedol in his match against an AI (artificial intelligence) known as AlphaGo. In the following photo, Lee is on the right, whereas the fellow in front of him has the job of carrying out the moves requested by the AI opponent.

Well, after two games, the AI has defeated Lee Sedol in both. So, the AI needs to clinch only one more game to win the match.

Needless to say (although I insist upon making this point, without attempting to go into details), this man/machine competition is more exciting and intellectually meaningful than the recent competitions involving a question-answering AI from IBM known as Watson.

Wednesday, February 17, 2016

Who's the American presidential candidate called Watson?

Click here to access a website that's designed to promote an American presidential candidate named Watson. You'll soon discover (if you don't know already) that this Watson candidate happens to be an AI: that's to say, an artificial intelligence. In more down-to-earth terms, Watson is merely a complex piece of computer software, capable of tackling problems (answering questions, above all) that used to be handled exclusively by bright human beings.

This AI software became famous when it succeeded in defeating human champions to win America's favorite quiz show, Jeopardy!

Since then, there has been a steady US buzz of superlatives aimed at convincing the people of the world (well, let's say, the people of God's Own Country) that this software tool is... well, awesome.

Personally, I got to know IBM quite well, having started my professional career in programming with that company in Australia, in the years 1957 to 1961, before working with their programming teams in Paris and London, in 1962 and 1963. Since then, I've also become quite familiar with the field of artificial intelligence. Well, in my humble opinion, much of what we hear from IBM as far as AI is concerned can be brushed aside as pure marketing buzz, business-oriented hype.

Tuesday, January 26, 2016

Marvin Minsky has left us

Marvin Minsky [1927-2016]

My first discussion with Marvin Minsky took place in the grounds of MIT (Massachusetts Institute of Technology) at Cambridge (USA) in 1971, when I was looking up individuals who might be prepared to participate in a TV project that I envisaged for my French employer, the Service de la Recherche de l'ORTF. In fact, Minsky never agreed to be filmed in the context of my project. Several years later, Minsky and his wife dropped in for lunch at my place in Paris.

Click here to see an article on Minsky in The New York Times.

Tuesday, October 8, 2013

Robot update

The Atlas robot, being developed by Boston Dynamics, stands 1.88 meters tall, weighs 150 kilograms and can use its stereoscopic vision to move around on rough and uneven surfaces. It can withstand shocks from a 20-kilogram pendulum, and balance on one leg.

Although US military funding is being used to develop the Atlas robot, we are assured that the machine will not be put into service as an infantry unit. That sounds reasonable, in that the machine would be highly vulnerable to the simplest gun/grenade attack. On the other hand, a robot such as this would be an extraordinary device in the context, say, of a catastrophe such as that of Fukushima.

Here's the four-legged Wildcat robot, also being developed by Boston Dynamics, which is a descendant of the Cheetah sprinter that I presented in an earlier blog post [display].

It's a pity that its "head" appears to be where its "buttocks" should be located, and vice versa. What impresses me most of all is Wildcat's ability to either bound or gallop. In any case, I'm convinced that Wildcat would be a fabulous friend for my dog Fitzroy, on the slopes of Gamone.

Thursday, September 13, 2012

Big robotic hounds

In my blog post of 8 September 2012 entitled Robotic runner [display], we saw a legged robot named Cheetah breaking a speed record on a laboratory treadmill.

The same DARPA organization [Defense Advanced Research Projects Agency] proposes the following spectacular video which presents field testing of their Legged Squad Support System (LS3).

I'm frankly terrified by the idea of such big robotic hounds roaming around out in the wilds. We must realize that DARPA isn't designing these metallic creatures as toys. This is pure military research. They say that the big beasts might be used as pack animals, to carry stuff that would normally be borne by human soldiers. But a robot that can transport military gear can also carry a machine gun. They could be trained to operate in a hunt-and-kill style, while being commanded at close range by vocal orders.

We've had glimpses recently of the terrifying efficiency of unmanned drones. Just imagine what a military confrontation might look like if the attacker were to deploy a mixture of airborne drones and legged ground robots. I have the impression that we're hurtling into a crazy science-fiction universe, in which battles will be fought by 5-star game-playing generals located far from the killing grounds, maybe in luxurious bunkers.

Wednesday, October 26, 2011

Pioneer in artificial intelligence

John McCarthy, 84, died in his sleep last Sunday evening.

Computer programmers of my generation who became interested in artificial intelligence (an expression coined by McCarthy in 1955) usually tackled the rudiments of the LISP language (developed by McCarthy) by means of this slim blue book:

John McCarthy was one of the experts whom I interviewed for my 52-minute TV documentary on artificial intelligence that was broadcast in France on 25 June 1972.

This documentary is housed in the archives of the French Institut national de l'audiovisuel. Click the banner to access the website page describing the documentary.

During the shooting of the documentary, I visited SAIL [Stanford AI Laboratory] out in the hills of Palo Alto, where McCarthy and his team were working on the creation of robot arms, seen in the background of the following photo:

I recall that their most advanced arm was being taught how to build a brick wall. It used a video camera to obtain feedback on the state of advancement of the wall, and to verify that none of its bricks had been wrongly placed. In a corner of the laboratory, the robot arm, its camera and the bricks were enclosed in a kind of plexiglass greenhouse, while the computers were located on the outside. McCarthy told me that testing a robot arm could be a dangerous activity. Program bugs were capable of causing the arm to pick up bricks and start throwing them at the programmers and their computers.

At that time, McCarthy and his colleagues were developing one of the world's first autonomous robotic vehicles. It looked like a kid's cart. Visitors arriving by car at SAIL would often be surprised to find themselves sharing the road with this slow-moving vehicle, which spent its time learning how to wander around on its own through the grounds of the laboratory without running off the road.

Friday, June 24, 2011

Curious seventh singer

Some of my readers are likely to wonder whether I found this story by hanging around sleazily on websites about Japanese adolescents. In fact, it was a tweet from the British New Scientist magazine that provided me with the initial link, since the technological feat in question is quite astonishing, along with its artistic and cultural repercussions.

That's a photo of the seven members of a Japanese girls' band named AKB48. In the middle, you have the lead singer, named Eguchi Aimi, whose harmonious facial features can be admired in this portrait:

For a while, the group was composed of only six girls. Then they were joined by Eguchi Aimi, and one of the first performances of the enlarged group was a video ad for candy, seen here:

Fans of the AKB48 group were recently flabbergasted to learn that the charming lead singer Eguchi Aimi is, in fact, not a real human being, but rather a synthesized screen-only creation. In other words, a virtual singer. But the most amazing thing of all is the way in which this artificial singer was assembled. The design team "borrowed" features from each of the real singers, and then scrambled them all together to give birth to Eguchi Aimi. For example, the eyes of Eguchi (on the left) come from the real-life young lady on the right:
Eguchi's sensuous mouth has been taken from another member of the group:

Her nose comes from yet another genuine singer:

Here's a fascinating video that provides you with a taste of Eguchi Aimi's talents as a performer, while showing you briefly how she was put together:


In any case, she's an attractive girl, she sings quite well (using God only knows whose voice), and she's certainly a natural seventh member of the group. If Eguchi Aimi didn't exist, it would surely be a good idea to invent her…

CORRECTION: Since writing this blog post, I've discovered that AKB48 is not simply a small girls' band, as I mistakenly imagined, but an entire cabaret company of some 60 performers, with their own theater in Tokyo. The Japanese are so well-behaved that no Japanese cabaret audience would ever dream of standing up and crying out for a live on-stage appearance of Eguchi Aimi. Fortunately...

Is seeing believing?

Let's see if you can successfully guess the nature of an individual by simply examining a portrait.

Don't click this photo yet. Wait until I've provided a few trivial explanations. I'll let you know when you should click the photo. This woman's name is Aude Oliva. Try to guess what sort of a person she is. For example, what sort of work might Aude do to earn her living?

Now, look closely at the following picture:

As you can see, Aude Oliva actually created this artwork. (Don't jump to the conclusion, though, that Aude is simply a talented graphic artist with Photoshop experience.) Most viewers will agree, I would imagine, that it seems to be a portrait of Albert Einstein. OK? Now, leave this Einstein photo sitting on your computer screen, get up from your chair, and move back about a meter from your computer screen. Does the portrait still appear to be that of Einstein?

If you click on the portrait of Aude Oliva, you'll find her professional titles: Associate Professor in the Department of Brain and Cognitive Sciences and a Principal Investigator in the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. And you'll be able to find details about the spectacular so-called hybrid images that she creates.
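Oliva's hybrid images rely on a simple principle: combine the low spatial frequencies of one picture with the high spatial frequencies of another. Up close, the fine detail dominates your perception; from a distance, only the coarse structure survives. Here's a minimal Python sketch of that principle (the crude box-blur filter and the synthetic test images are my own simplifications, not Oliva's actual procedure, which uses Gaussian filtering of real photographs):

```python
import numpy as np

def box_blur(img, k=9):
    """Crude low-pass filter: a separable k-by-k box blur."""
    kernel = np.ones(k) / k
    tmp = np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda col: np.convolve(col, kernel, mode="same"), 0, tmp)

def hybrid_image(far_img, near_img, k=9):
    """Low frequencies of far_img plus high frequencies of near_img.

    Up close, the fine detail of near_img dominates; from a distance
    (or when blurred), only far_img's coarse structure remains."""
    low = box_blur(far_img.astype(float), k)                             # coarse structure
    high = near_img.astype(float) - box_blur(near_img.astype(float), k)  # fine detail
    return low + high

# Synthetic demo: a smooth gradient plays the "far" image,
# a fine checkerboard plays the "near" image.
far = np.outer(np.linspace(0.0, 255.0, 64), np.ones(64))
near = 255.0 * (np.indices((64, 64)).sum(axis=0) % 2)
hyb = hybrid_image(far, near)
```

Blurring the hybrid (the computational equivalent of stepping back a meter) strips away the checkerboard and leaves essentially the gradient, which is exactly the Einstein-at-a-distance effect.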

Wednesday, January 26, 2011

Centenary of a computing giant

These days, we hear a lot about the achievements of Apple. I'm unlikely to complain about that, of course, because I've always been totally addicted to the products of Cupertino, from back at the time I wrote my first book about the Mac, in 1984, and even before then, at the pioneering epoch of the Apple II computer.

In the midst of all the talk about the marvelous creations of Steve Jobs, we must never forget, however, that the Big Daddy of computing has always remained a celebrated US corporation that made a name for itself by selling so-called "business machines" on an international scale.

In 2011, the company will be turning 100, which means that it was born in the same year as Tennessee Williams, Ronald Reagan and France's Georges Pompidou. I joined IBM in Sydney towards the end of 1957, and worked as a computer programmer using the Fortran language on a vacuum-tube machine called the IBM 650, whose central memory was housed on a revolving magnetically-coated drum.

The new IBM website designed to celebrate the centenary includes an interesting video on the second-generation transistorized computer that came next: the IBM 1401, seen here in an old marketing photo:

This was the machine I was programming (in a macro-assembler language called Autocoder) at the time I arrived in Paris, in 1962, and started to work at the European headquarters of IBM. Click the above photo to see the video concerning this machine, which shows various former IBMers of my generation.

These days, IBM has embarked upon a colossal computer challenge in the domain of artificial intelligence. Known as Watson (the name of the founder of IBM), this project aims to get a computer to perform better than human beings in the American TV game called Jeopardy! The system, based upon so-called massively-parallel probabilistic evidence-based architecture, incorporates a vast array of big boxes that have much the same external aspect as the units of an archaic IBM 1401… but you can be sure they do more things!

Click the photo to visit IBM's website on their fabulous Watson project.

AFTERTHOUGHT: It's good, in a way, that IBM has been somewhat out of the limelight for many years, compared to companies such as Microsoft, Apple and Google. That has enabled IBM to move ahead quietly and constantly in a field such as artificial intelligence without too much media interference. But this situation is likely to change in a spectacular fashion as soon as Watson starts to bare its teeth… which is exactly what's happening at this very moment. Personally, I would not hesitate for a moment in declaring that a project such as Watson represents one of the greatest human challenges of all time: the invention of a deus ex machina that seems to be approaching the spirit of the famous IBM slogan.

I used to dream about that challenge back in the early '70s, when I was making a series of documentaries on this subject in the USA, for French TV, and writing my book on artificial intelligence.

And I still do, today, more than ever… particularly since scholars such as Richard Dawkins and Steven Pinker have convinced me that we human beings are "merely" a special kind of machine, imbued with a strange property (not yet understood, of course) referred to as consciousness.

ANECDOTE: You might wonder why software engineers at Google and elsewhere have been scanning vast libraries of books of all kinds, and making them freely available to researchers. Are the corporations and engineers doing this because they want to offer more and more reading material, philanthropically, to old-timers such as you and me? Don't be naive! They're building those vast digital libraries for readers of a new kind: future generations of intelligent computers.

BREAKING NEWS: Stephen Wolfram, in his blog [display], seems to believe that IBM's Watson will win the forthcoming Jeopardy! TV event. Moreover, he is encouraging IBM… even though their Watson is a competitor of his own approach: the Wolfram|Alpha system.

Friday, May 21, 2010

Lego life

Back at the time I became interested in so-called artificial intelligence, and started work on my book entitled Machina Sapiens, I did not imagine that the most direct link between computing concepts (such as programming) and human beings would be established, not through attempting to simulate what we think of as intelligence, but rather by synthesizing life itself. Craig Venter has just made a gigantic breakthrough in this domain.

Obviously, this activity is Promethean. Man is starting to play with the fire of the gods, and nobody knows where this work will lead. But it's unthinkable that it could be halted.

Friday, February 5, 2010

Tomorrow's computing concepts

Many years ago, when I was visiting MIT (Massachusetts Institute of Technology) in Cambridge, Massachusetts, for French TV, I recall meeting up with a young guy on the staff of their AI (artificial intelligence) group who was apparently paid to do little more than dream up ideas of a science-fiction kind about the future of computing. This friendly one-man think tank gave me a copy of his latest paper, which was a lengthy list of possible inventions, described with an abundance of freshly-coined technical words and abstract philosophical expressions. I remember that he used the AI acronym as a noun, designating what most people at that time would have called a robot. Apart from that, though, little else in his futuristic wish-list was within my conceptual grasp.

Apparently this tradition still exists at MIT. Yesterday, my friend Brahim Djioua (himself an AI researcher at the Sorbonne) sent me a link to a fascinating video about a visionary fellow named Pranav Mistry, a graduate of the IIT (Indian Institute of Technology) who went on, while working on his doctorate at MIT, to dream up a fabulous approach to computing as it might exist in the near future. The following video speaks for itself, since Pranav has actually implemented many of his dreams in the form of real devices, but you may have to watch the video several times (as I did) for the astonishing messages to get through clearly.

Wednesday, April 30, 2008

Understanding pictures

If you submitted the above pair of images to a computer equipped for visual processing, the machine should be able to determine that the thing on the left is a hedgehog, whereas the object on the right is a wire brush. Initially, the computer would exploit the hints provided by the image captions, "animal" and "tool". Then it would search through its databases to examine hypotheses about various wiry-looking animals and wiry-looking tools. But suppose we were to submit the images without any captions, as follows:

In this case, the computer would no doubt find it much more difficult to correctly identify the two images.

The point I'm trying to make is that computers have a hard job trying to perform various tasks that are trivially easy for humans. And tasks that depend solely on visual cues, with no linguistic hints whatsoever, are particularly difficult for the computer. A simple glance informs us that the thing on the left looks like an animal, in that the dark "holes" are surely eyes, and the rectangular bit that protrudes in the foreground is surely a snout. As for the object on the right, its sharply-defined contours reveal instantly that it's a manufactured artifact. However, it still remains an extremely arduous task to try to instill this kind of common-sense visual approach in computers.

As a child, I used to see my father shaving with an old-fashioned steel razor. One day, while being driven through the outback countryside by my grandparents, I saw a hillside whose trees had suffered recently in a bushfire, and I had the sudden impression that the dark stumps could be likened to my father's face when he was in need of a shave. I imagined that it might be possible to attach a giant steel razor to our automobile, enabling us to shave down the burnt trees. Some people might say kindly that I had a vivid imagination. In fact, I was reacting like a poorly-programmed computer, incapable of making instantly a clear distinction between hedgehogs and wire brushes. Since evolving into an experienced adult (?), I'm no longer inclined to associate an unshaved face with a hillside of scorched tree stumps. You might say that my childhood power of imagination has disappeared. On the other hand, I don't confuse hedgehogs with wire brushes.

When I started work in the research department of the ORTF [French Broadcasting System] in 1970, a TV producer asked me whether it would be possible to develop a computer program enabling the machine to "watch" old movie sequences, for days and nights on end, with the aim of extracting all the top-quality images, say, of trees. The fellow was concerned with the use of TV as an educational medium in certain African nations, and he felt that tons of existing images could be recycled to make excellent documentaries for African audiences. I disappointed him by pointing out that a computer would be incapable of separating images of people into males and females, so it was premature to talk about software capable of selecting attractive images of trees.

Google has been working for a long time on image-recognition algorithms, and two of their engineers have just presented a paper on this subject at a web conference in Beijing. Their experimental tool named VisualRank attempts to weight and rank web images that look alike... in much the same way that the familiar Google tool weights and ranks websites that appear to talk about a given subject. It goes without saying that the practical applications of such a tool would be immense and profitable. Users interested in a certain kind of object or article (such as clothes they would like to purchase, for example) could expect to start with a rough visual outline of their goal, and go on to access pictures of relevant items supplied by the search engine.
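The PageRank analogy can be made concrete. In the following toy sketch (my own reconstruction of the general idea, not Google's code), pairwise visual-similarity scores play the role of hyperlinks, and the usual damped random-surfer iteration ranks each image by how strongly it resembles other well-ranked images:

```python
import numpy as np

def visual_rank(similarity, damping=0.85, iters=100):
    """PageRank-style ranking over an image-similarity graph.

    similarity[i][j] is a non-negative visual-similarity score between
    images i and j; an image that resembles many well-ranked images
    ends up with a high rank itself."""
    S = np.asarray(similarity, dtype=float)
    n = S.shape[0]
    np.fill_diagonal(S, 0.0)             # no self-votes
    col_sums = S.sum(axis=0)
    col_sums[col_sums == 0] = 1.0        # guard against isolated images
    P = S / col_sums                     # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)              # start from a uniform ranking
    for _ in range(iters):
        r = (1 - damping) / n + damping * P @ r
    return r / r.sum()

# Three near-duplicate photos (0, 1, 2) plus one visual outlier (3):
sim = [[0, 9, 8, 1],
       [9, 0, 9, 1],
       [8, 9, 0, 1],
       [1, 1, 1, 0]]
ranks = visual_rank(sim)
```

On this tiny made-up similarity matrix, the three mutually similar images reinforce one another and the outlier sinks to the bottom of the ranking, which is precisely what you'd want when hunting for the canonical picture of a product.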

I have the impression, however, that Google still has a long way to go before reaching that point. So, if you intend to use the web to purchase either a hedgehog or a wire brush, be wary about supplying nothing more than a vague image of what you're looking for. For the moment, it would be wiser to write down in words exactly what you want.

Monday, September 10, 2007


I'm amused to see the extent to which the buzzword "singularity" has gained ground in recent years. When I was a student, singularity was a rather ordinary mathematical concept. Roughly speaking, if a mathematical function behaved normally except for certain particular values of its arguments, these special cases were designated as singularities. Then the word was applied to theoretical situations in which the normal laws of physics break down. The most famous case of a so-called space-time singularity occurs within black holes.
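A throwaway example (my own, purely illustrative) shows that original mathematical sense of the word:

```python
def g(x):
    """g(x) = 1/x behaves normally for every argument except x = 0,
    where the formula breaks down: a singularity (here, a pole)."""
    return 1.0 / x

# Approaching the singular point, the function grows without bound.
values = [g(10.0 ** -k) for k in range(1, 6)]
```

Calling g(0.0) outright would simply raise a ZeroDivisionError: the one point where the otherwise well-behaved rule has nothing sensible to say.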

More recently, the word "singularity" has been used to designate an advanced case of AI [artificial intelligence], namely an ultraintelligent machine. If AI researchers were indeed capable of designing a machine that happened to be more intelligent, in general, than the brightest humans [which is a situation that has never yet arisen in practice], then we might expect this smart machine to be smarter than humans in various engineering tasks. Among other challenges, that machine could turn out to be extremely talented in the art of designing even smarter machines... which might give rise to a snowball effect. And the end result could well be a vastly intelligent machine of the kind referred to as a singularity.

A colloquium on this theme, called the Singularity Summit, has just been organized by the Singularity Institute for Artificial Intelligence in Palo Alto, California [location of Stanford University].

Many singularist believers predict that technological progress is accelerating at such a rate that ultraintelligent machines are just around the corner. Detractors, on the other hand, claim that the AI singularity concept is no more than harmless garden-variety science fiction. As for me, although I have the retrospective impression that AI research [which once interested me greatly] ran into a brick wall a couple of decades ago, I must say that the power of computing amazes me today in ways that I would never have imagined not so long ago. Consequently, I'm ready for anything.