Google has quietly stepped back into the world of robotics with a reboot of a programme originally launched in 2013.

Andy Rubin, then VP of engineering at Google, named the project ‘Replicant’ in a nod to the ’80s sci-fi blockbuster Blade Runner. The ambitious but troubled venture shut up shop in 2018, shortly after Rubin left the company amid sexual harassment accusations. Parts of the experimental Google arm were auctioned off, with a number of assets sold to the Japanese company SoftBank, but the deal to sell Replicant collapsed.

In typically corporate fashion, Replicant was all but quashed. There was no further comment from Google about a revival, and no hint at what would become of the projects left behind: a ‘human-like’ robotic hand, bipedal (two-legged) robots that ‘stood’ upright and could climb effectively, and various other futuristic designs, all worked on behind closed doors at the company’s Mountain View headquarters.

All that has changed with the rebirth of Google’s focus on robotic hardware.

The revamped team is now led by Principal Scientist Vincent Vanhoucke, and by the looks of things it is keen to shift away from the showboating efforts of Rubin’s tenure. Where once there were dog-like constructions and walking robots, the focus is now on solving more immediate global issues with practical technology. There’s less emphasis now on the aesthetics of the hardware, and a more ‘back-to-basics’ view of the possibilities of automation and AI.

The shift in approach is reflected in the new name: Robotics at Google. Vanhoucke was a key figure in the development of the AI mega-project Google Brain, and although outward appearances look simpler, there are more advanced technologies beneath the surface.

Believe it or not, there is a very real possibility of these robots being applied in logistics and warehousing before many other sectors. There are already plenty of robots at work in warehouses across the world, and a number of further opportunities have been identified for development. But most current warehouse robots can handle only a single task: turning identical screws repetitively, for example, or picking up one type of object and moving it in the same motion, over and over.

Inside the Google robotics lab now is a machine called TossingBot. This robotic arm doesn’t look much more complex than the ones you’d see in a manufacturing plant or a 3PL centre, but the difference in software is remarkable. As demonstrated in a research paper published this month*, the robot was faced with a bin full of items including ping pong balls, bananas and plastic toys. When the machine was first presented with the bin, its task to ‘toss’ specific items into an adjacent empty container, the arm wasn’t programmed with code that would allow it to complete the task directly: in simple terms, the arm didn’t ‘know’ how to identify, pick up and throw the items. Over 14 hours, the arm not only taught itself how to perform the task, but continuously analysed and adjusted its own success rate.
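To give a flavour of what ‘teaching itself’ means here, consider a deliberately simplified trial-and-error loop. This is my own illustration, not Google’s code — the real TossingBot learns from camera images with neural networks — and the simulated throw, tolerances and step size are all invented for the sketch. The core idea survives the simplification: throw, observe the outcome, correct, repeat.

```python
import random

def observed_landing(velocity):
    """Hypothetical stand-in for the overhead camera: landing distance
    of a 45-degree throw (d = v^2 / g), plus a little measurement noise."""
    g = 9.81
    return velocity ** 2 / g + random.gauss(0, 0.01)

def calibrate(target=1.5, velocity=3.0, trials=100, step=0.5):
    """Trial-and-error calibration: throw, watch where the object lands,
    nudge the commanded release velocity against the error, repeat.
    Returns the final velocity estimate and the overall hit rate."""
    hits = 0
    for _ in range(trials):
        error = observed_landing(velocity) - target  # positive = overshoot
        if abs(error) < 0.05:                        # within 5 cm of the bin
            hits += 1
        velocity -= step * error                     # simple proportional correction
    return velocity, hits / trials

v, rate = calibrate()
```

The first throws miss, the corrections shrink the error, and the hit rate climbs — which is essentially the pattern the researchers report over those 14 hours, only with a single number being learned here instead of a full perception-and-control policy.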

You might scoff: for humans, grasping an object and throwing it is an essentially innate skill. But when it is broken down into separate acts and sets of calculations, it quickly becomes clear how difficult the task is: the shape of the object, the purchase or grip the robot ‘fingers’ have on it, the distance to the release destination (the empty bin), the velocity required to get the object there given its mass…
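Even the last of those calculations — the required release velocity — is non-trivial. As a rough illustration (my own, using textbook projectile physics rather than anything from the paper, and ignoring air resistance and release height), the ideal-case estimate looks like this:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def release_velocity(distance_m, angle_deg=45.0):
    """Speed needed to land a projectile distance_m away, assuming release
    and landing at the same height and no drag.
    Range equation: d = v^2 * sin(2a) / g  =>  v = sqrt(g * d / sin(2a))."""
    angle_rad = math.radians(angle_deg)
    return math.sqrt(G * distance_m / math.sin(2 * angle_rad))

# A bin 1.5 m away needs roughly 3.8 m/s at a 45-degree release angle.
v = release_velocity(1.5)
```

And this is the easy part: a banana doesn’t fly like a ping pong ball, and a slightly off-centre grip changes the release point — which is precisely why a fixed formula isn’t enough and the robot has to learn corrections from experience.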

Writing code that enables a machine to do all of this is incredibly complex. Yet the result actually saw the robot arm outperform the team of researchers by 5% in accuracy. It goes without saying that these intelligent robots could have significant uses inside distribution warehouses, such as those used by Amazon and UPS. The robotic arm has astonishing dexterity, and could be utilised for unloading cargo, consignment sorting, order fulfilment, packaging and the like.

Other research projects in the same lab include training robotic hands to push, pull and spin objects, undoubtedly also useful in a warehouse setting. DHL trialled its Parcel Robot back in 2003, but the technology wasn’t mature enough for a full rollout. With newly mobile robots equipped with machine-learning capability, there is a real chance of progress. And, if we allow ourselves to think outside the warehouse, we can imagine a not-so-distant future in which robots clear debris in the event of an accident or a consignment spillage.

So the question inevitably arises: what does this mean for the current workforce, the employees already working in the kinds of positions that Google and others have claimed robots could fill more efficiently?

There will always be conversations to be had about robots taking people’s jobs, and unfortunately even the best market analysts seem to disagree. I do not claim to be an expert, but I will say this: there is a gap in the workforce in both distribution and operations. Robotics could help to plug this gap, and should be used to do so for the more repetitive tasks that carry a high risk of physical or cognitive fatigue, and for other ‘non-skilled’ tasks. But this effort needs to be complemented by the education and re-skilling of the current workforce.

In order to keep our socioeconomic status stable and encourage growth, people shouldn’t be forced out of jobs, but instead re-trained and assigned roles with different, perhaps higher, responsibilities. It’s my hope that in the near future the general population, children and adolescents in the academic system, and the existing workforce will all be exposed to these emerging technologies: the mechanics, and the ideas behind their introduction. This seems to be how our future will remain economically viable.


*TossingBot uses machine learning to decide how to grasp an object, which in turn makes its throws more accurate.

Inside Google by NYT