
The Google Self-Driving Car is a project by Google to develop technology for autonomous vehicles, mainly electric cars. The software powering Google’s cars is called Google Chauffeur. Lettering on the side of each car identifies it as a “self-driving car”. The project is currently led by Google engineer Sebastian Thrun, former director of the Stanford Artificial Intelligence Laboratory and co-inventor of Google Street View. Thrun’s team at Stanford created the robotic vehicle Stanley, which won the 2005 DARPA Grand Challenge.


Legislation has been passed in four U.S. states and Washington, D.C. allowing driverless cars. The state of Nevada passed a law on June 29, 2011, permitting the operation of autonomous cars in Nevada, after Google had lobbied in that state for robotic-car laws. The Nevada law went into effect on March 1, 2012, and the Nevada Department of Motor Vehicles issued the first license for an autonomous car in May 2012, to a Toyota Prius modified with Google’s experimental driverless technology. In April 2012, Florida became the second state to allow the testing of autonomous cars on public roads, and California became the third when Governor Jerry Brown signed the bill into law at Google headquarters in Mountain View. In December 2013, Michigan became the fourth state to allow testing of driverless cars on public roads. In July 2014, the city of Coeur d’Alene, Idaho adopted a robotics ordinance that includes provisions to allow for self-driving cars.

In May 2014, Google presented a new concept for their driverless car that had neither a steering wheel nor pedals, and unveiled a fully functioning prototype in December of that year that they planned to test on San Francisco Bay Area roads beginning in 2015. Google plans to make these vehicles available to the public in 2020.


John Moravec of Education Futures interviewed mathematician and science-fiction writer Vernor Vinge, noted for his foundational 1993 essay, “The Coming Technological Singularity.”

“I’m still where I was in my 1993 essay that I gave at a NASA meeting, and that is that I define the Technological Singularity as being our developing, through technology, superhuman intelligence — or becoming, ourselves, superhuman intelligent through technology,” said Vinge. “And, I think calling that the Singularity is actually a very good term in the sense of vast and unknowable change. A qualitatively different sort of change than technological progress in the past.”

He still believes four pathways could lead to the development of the Singularity by 2030:
1. The development of computers that are “awake” and superhumanly intelligent.
2. Large computer networks (and their associated users) may “wake up” as a superhumanly intelligent entity.
3. Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
4. Biological science may find ways to improve upon the natural human intellect.

When asked which one is more likely, he hinted that he sees a digital Gaia of networks plus people emerging: “The networked sum of all the embedded microprocessors in all our devices becomes a kind of digital Gaia. That qualifies, as an ensemble, as a superhuman entity. That is probably the weirdest of all possibilities because, if anything, it looks like animism. And, sometimes I point to it when I want to make the issue that this can be very strange. I think that actually the networking of embedded microprocessors is going like gangbusters. The network that is the Internet plus humanity, that is also going with extraordinary surprises, if you just look at the successes in the various schemes that go by names like crowdsourcing. To me, those have been astounding, and should give people real pause with how to use the intellectual resources that we actually have out there. So far, we do not have a single computer that is really of human-level intelligence, and I think that is going to happen. But, it is a kind of an amazing thing that we have an installed base of seven billion of these devices out there.”

What does this mean for education?
Vinge believes talking about post-Singularity situations in education is impractical. In theory, it is impossible for us to predict or comprehend what will happen, so we should not focus our attention on worrying about post-Singularity futures. Rather, we should focus on the ramp-up to the Singularity, our unique talents, and how we can network together to utilize them in imaginative ways:
When dealing with unknown futures, it remains unknown how to prepare people best for them. He states that the best pathway involves teaching children “to learn how to learn” (a key theme in Fast Times at Fairmont High), and that the best way we can encourage the development of positive futures is to attend to diversity in our learning systems. We need not only to facilitate the formation of diverse students, but also to abandon a monoculture approach to education and attend to a diverse ecology of options in teaching and evaluation.

By synchronizing 98 tiny cameras in a single device, electrical engineers from Duke University and the University of Arizona have developed a prototype camera that can create images with unprecedented detail. The camera’s resolution is five times better than 20/20 human vision over a 120-degree horizontal field. The new camera has the potential to capture up to 50 gigapixels of data, which is 50,000 megapixels. By comparison, most consumer cameras are capable of taking photographs with sizes ranging from 8 to 40 megapixels. Pixels are individual “dots” of data – the higher the number of pixels, the better the resolution of the image. The researchers believe that within five years, as the electronic components of the cameras become miniaturized and more efficient, the next generation of gigapixel cameras should be available to the general public. Details of the new camera were published online in the journal Nature. The team’s research was supported by the Defense Advanced Research Projects Agency (DARPA).
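To get a feel for the numbers above, here is a back-of-the-envelope sketch of how much raw data a 50-gigapixel capture represents. The 3-bytes-per-pixel figure (24-bit RGB) is an illustrative assumption; the article does not state the sensor’s bit depth.

```python
def image_size_bytes(pixels, bytes_per_pixel=3):
    """Uncompressed size of an image with the given pixel count,
    assuming 3 bytes (24-bit RGB) per pixel."""
    return pixels * bytes_per_pixel

gigapixel = 10**9
megapixel = 10**6

prototype = 50 * gigapixel      # the 50-gigapixel capture
consumer = 40 * megapixel       # a high-end 40-megapixel consumer camera

print(image_size_bytes(prototype) / 1e9)   # ~150 GB, uncompressed
print(prototype // consumer)               # 1250x the pixel count
```

One uncompressed frame would be on the order of 150 GB, roughly 1,250 times the data of a top-end consumer photograph, which helps explain why miniaturizing the supporting electronics is the bottleneck the researchers cite.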

Klas Tybrandt, doctoral student in Organic Electronics at Linköping University, Sweden, has developed an integrated chemical chip. The results have just been published in the prestigious journal Nature Communications. The Organic Electronics research group at Linköping University previously developed ion transistors for transport of both positive and negative ions, as well as biomolecules. Tybrandt has now succeeded in combining both transistor types into complementary circuits, in a similar way to traditional silicon-based electronics.

An advantage of chemical circuits is that the charge carrier consists of chemical substances with various functions. This means that we now have new opportunities to control and regulate the signal paths of cells in the human body. “We can, for example, send out signals to muscle synapses where the signalling system may not work for some reason. We know our chip works with common signalling substances, for example acetylcholine,” says Magnus Berggren, Professor of Organic Electronics and leader of the research group.

The development of ion transistors, which can control and transport ions and charged biomolecules, was begun three years ago by Tybrandt and Berggren, respectively a doctoral student and professor in Organic Electronics at the Department of Science and Technology at Linköping University. The transistors were then used by researchers at Karolinska Institutet to control the delivery of the signalling substance acetylcholine to individual cells. The results were published in the well-known interdisciplinary journal PNAS. In conjunction with Robert Forchheimer, Professor of Information Coding at LiU, Tybrandt has now taken the next step by developing chemical chips that also contain logic gates, such as NAND gates that allow for the construction of all logical functions. His breakthrough creates the basis for an entirely new circuit technology based on ions and molecules instead of electrons and holes.
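The claim that NAND gates “allow for the construction of all logical functions” is the standard functional-completeness result from digital logic, and it applies to the chemical chip just as it does to silicon. A minimal sketch, with the gates modeled as plain Python functions:

```python
def nand(a, b):
    """The only primitive gate: true unless both inputs are true."""
    return not (a and b)

def not_(a):
    # NOT from a single NAND with both inputs tied together
    return nand(a, a)

def and_(a, b):
    # AND is the negation of NAND
    return not_(nand(a, b))

def or_(a, b):
    # OR via De Morgan: a OR b == NAND(NOT a, NOT b)
    return nand(not_(a), not_(b))

# Exhaustive check against Python's built-in Boolean operators
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert not_(a) == (not a)
```

Since NOT, AND, and OR can each be built from NAND alone, any Boolean circuit can be, which is why demonstrating a working NAND is the key milestone for a new logic substrate.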

DARPA is planning to announce a new Grand Challenge for a teleoperated humanoid robot.

The specific tasks are:

1) The robot will maneuver to an open-frame utility vehicle, such as a John Deere Gator or a Polaris Ranger. The robot is to get into the driver’s seat and drive it to a specified location.

2) The robot is to get out of the vehicle, maneuver to a locked door, unlock it with a key, open the door, and go inside.

3) The robot will traverse a 100-meter, rubble-strewn hallway.

4) At the end of the hallway, the robot will climb a ladder.

5) The robot will locate a pipe that is leaking a yellow-colored gas (non-toxic, non-corrosive). The robot will then identify a valve that will seal the pipe and actuate that valve, sealing the pipe.

6) The robot will locate a broken pump and replace it.

The brain appears to be wired more like the checkerboard streets of New York City than the curvy lanes of Columbia, Md., suggests a new brain imaging study. The most detailed images, to date, reveal a pervasive 3D grid structure with no diagonals, say scientists funded by the National Institutes of Health. “Far from being just a tangle of wires, the brain’s connections turn out to be more like ribbon cables — folding 2D sheets of parallel neuronal fibers that cross paths at right angles, like the warp and weft of a fabric,” explained Van Wedeen, M.D., of Massachusetts General Hospital (MGH), A.A. Martinos Center for Biomedical Imaging and the Harvard Medical School. “This grid structure is continuous and consistent at all scales and across humans and other primate species.” Wedeen and colleagues report new evidence of the brain’s elegant simplicity March 30, 2012 in the journal Science. The study was funded, in part, by the NIH’s National Institute of Mental Health (NIMH), the Human Connectome Project of the NIH Blueprint for Neuroscience Research, and other NIH components. “Getting a high resolution wiring diagram of our brains is a landmark in human neuroanatomy,” said NIMH Director Thomas R. Insel, M.D. “This new technology may reveal individual differences in brain connections that could aid diagnosis and treatment of brain disorders.”

A milestone of sorts was passed last year when IBM’s Watson supercomputer bested the two human superstars of the Jeopardy! TV program, answering questions that would stump an ordinary person. Tensions were high during this contest, since the stakes were potentially very great. This contest harkens back to the famous test mentioned by the AI pioneer, Alan Turing, who said that, sooner or later, a machine will be so advanced that its answers to questions will be indistinguishable from a human’s answers. To be fair, no machine can pass a Turing test in all situations. It is still relatively easy for a human to detect which answer came from a human and which came from a machine. But the victory of IBM’s Watson computer shows that, one by one, the machines are chipping away at the supremacy of human beings. What this contest showed was that, in a very specialized area, machines can do better than humans. This involves answering questions that are posed in a highly stylized way, suitable for the Jeopardy! TV program. This does not involve answering questions that are posed, off-the-cuff, by an ordinary person using colloquial, conversational English.

People who constantly reach into a pocket to check a smartphone for bits of information will soon have another option: a pair of Google-made glasses that will be able to stream information to the wearer’s eyeballs in real time. According to several Google employees familiar with the project who asked not to be named, the glasses will go on sale to the public by the end of the year. These people said they are expected “to cost around the price of current smartphones,” or $250 to $600. The people familiar with the Google glasses said they would be Android-based, and will include a small screen that will sit a few inches from someone’s eye. They will also have a 3G or 4G data connection and a number of sensors including motion and GPS. A Google spokesman declined to comment on the project. Seth Weintraub, a blogger for 9 to 5 Google, who first wrote about the glasses project in December, and then discovered more information about them this month, also said the glasses would be Android-based and cited a source that described their look as that of a pair of Oakley Thumps. They will also have a unique navigation system. “The navigation system currently used is a head tilting to scroll and click,” Mr. Weintraub wrote this month. “We are told it is very quick to learn and once the user is adept at navigation, it becomes second nature and almost indistinguishable to outside users.” The glasses will have a low-resolution built-in camera that will be able to monitor the world in real time and overlay information about locations, surrounding buildings and friends who might be nearby, according to the Google employees. The glasses are not designed to be worn constantly — although Google expects some of the nerdiest users will wear them a lot — but will be more like smartphones, used when needed.
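The tilt-to-scroll navigation described above could be modeled along these lines. Google has not published the interface, so the threshold value, the action names, and the pitch-to-action mapping here are all assumptions made purely for illustration.

```python
def interpret_tilt(pitch_degrees, threshold=15.0):
    """Map a head pitch angle (degrees, positive = tilting forward)
    to a hypothetical navigation action. Angles within the dead zone
    around level are ignored so normal head movement doesn't scroll."""
    if pitch_degrees > threshold:
        return "scroll_down"
    if pitch_degrees < -threshold:
        return "scroll_up"
    return "idle"

print(interpret_tilt(20.0))   # scroll_down
print(interpret_tilt(-18.0))  # scroll_up
print(interpret_tilt(3.0))    # idle
```

The dead zone is the key design choice in any head-gesture scheme: without it, every small, unconscious head movement would register as input, which is presumably part of what makes the real system “very quick to learn” once tuned.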

Big defense budgets during the aughts financed the deployment of thousands of robots, including unmanned aerial and underwater vehicles, to Iraq and Afghanistan. The Pentagon’s fascination with robots hasn’t slackened even in these more austere times. The Defense Advanced Research Projects Agency (Darpa) is funding Boston Dynamics’ development of a prototype robot called the Cheetah. On March 5, the company announced that the cat-like bot managed to gallop 18 mph on a treadmill, setting a new land speed record for legged robots. (The previous record: 13.1 mph, set at the Massachusetts Institute of Technology in 1989.) Boston Dynamics, a 1992 spinoff from MIT that’s headed by Marc Raibert, has also developed a quadrupedal pack robot called the Legged Squad Support System (LS3). And in a move sure to wig out elements of the singularity movement, the company has a prototype human-like robot in the works called the Atlas that can walk upright and use its hands for balance while squeezing through narrow passages on surveillance or emergency rescue missions. As for the Cheetah, Raibert thinks the cat-bot could clock speeds of nearly 40 mph once key design and technical features are further refined. “We’ve solved a lot of the engineering problems,” he says. Raibert declined to say when such a technology would be ready for the battlefield, but he says this sort of machine could someday serve as a “scout robot” and “maybe deliver some payload.” This kind of machine could also be useful in emergency rescue and civilian disasters, the company says. In the latest speed test, the Cheetah was tethered to a hydraulic pump for power and relied on a boom-like device to help maintain balance. “It’s a lot like training wheels,” says Raibert. Those come off later this year when Boston Dynamics will start testing a free-running robot that will have an internal gas-powered engine and software capable of handling 3D movements. The Boston Dynamics research team is working with Dr. Alan Wilson, an expert on the dynamics of fast-running animals at London’s Royal Veterinary College. While the Cheetah won’t be combat-ready for some time, its technology may be more immediately useful in improving other Boston Dynamics bots. Last month Darpa announced it had started field-testing Boston Dynamics’ LS3 pack-bot, including the ability to carry 400 lbs on a 20-mile trek in 24 hours without being refueled.

Technology has always strived to match the incredible sophistication of the human body. Now electronics and hi-tech materials are replacing whole limbs and organs in a merger of machine and man. Later this year a team of researchers will try out the first bionic eye implant in the UK, hoping to help a blind patient see with their damaged eye, unlike alternative approaches that use a camera fitted to a pair of glasses. The light-sensitive chip is attached under the retina at the back of the eye. It converts light into electrical impulses which are then sent to the brain. The patient is then able to interpret the light falling onto the tiny 1,500 pixel implant as recognisable images. The implant costs about £65,000 ($100,000; 80,000 euros) excluding surgery and maintenance costs. Clinical trials in Germany have restored sight to some patients who were completely blind due to retinal disease. They were able to read and see basic shapes after the chip was fitted. Prof Robert MacLaren will lead the trial at Oxford Eye Hospital, along with Tim Jackson at King’s College Hospital. It is one of the extraordinary medical breakthroughs in the field, which are extending life by years and providing near-natural movement for those who have lost limbs. Over the coming weeks, BBC News will explore the field of bionics in a series of features. We start with a selection of the latest scientific developments. The Bionic Bodies series on the BBC News website will be looking at how bionics can transform people’s lives. We will meet a woman deciding whether to have her hand cut off for a bionic replacement and analyse the potential to take the technology even further, enhancing the body to superhuman levels. The series continues on Wednesday with a look at some of the earliest prosthetics from ancient Egypt.
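A quick way to put the 1,500-pixel figure in perspective is to compare it with an everyday camera sensor. The square-array layout assumed below is purely for illustration; the article does not describe the chip’s actual geometry.

```python
import math

# The article gives the implant roughly 1,500 light-sensitive pixels.
pixels = 1500

# Assuming a roughly square photodiode array (an assumption, not from
# the article), that is about 38-39 elements per side.
side = math.isqrt(pixels)       # largest whole-number side: 38 (38*38 = 1444)
print(side)                     # 38

# For scale: an 8-megapixel phone camera has over 5,000x as many pixels.
print(8_000_000 // pixels)      # 5333
```

Even this coarse grid is enough for the reading and shape recognition reported in the German trials, which shows how much of the heavy lifting is done by the brain’s own interpretation of the signal rather than by raw pixel count.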

