The future fracturing of the internet: How access will define the web of tomorrow

The web of the future will look radically different depending on the method of access, to the point where many will come to think of it as several separate things, thanks to the evolution of technologies such as virtual reality.

When the World Wide Web launched a quarter of a century ago, it was accessed on hulking desktop computers in university labs and the homes of the wealthy but nerdy.

Over time access spread, first to more affordable computers, then to laptops and palmtops, and finally to smartphones and tablets. Now we expect to be able to access the web in some form from almost every electronic device we own, including TVs, smartwatches, music players and more.

The abilities that the internet has given us have made us almost superhuman. We can find the answer to almost any question in moments, and learn almost any skill just through online resources.

In some countries the internet is now even regarded as a human right, something so important that it would be abhorrent to prevent people from accessing it.

[Inline image: many-devices]

Evolving the web

The internet as we know it now is just a step on the road to what it will become. Just as it has moved far beyond the first web page, so will it continue to change and grow as technology allows.

Most interesting, however, is that it will evolve into several different forms of internet, depending on the method of access.

We are already starting to see the embryo of this.

CSS3 media queries have enabled websites to appear differently depending on the device they are accessed from. While for most websites this just means a simplified version for smartphones, some have gone further, tailoring content and in some cases serving completely different designs to suit the audience.
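As a simple illustration (a minimal sketch, not taken from any particular site – the class name is hypothetical), a media query can hide secondary content and enlarge text whenever the viewport is narrow:

    /* Serve a simplified layout to narrow (typically mobile) viewports */
    @media screen and (max-width: 600px) {
      .sidebar { display: none; }   /* hide secondary content (hypothetical class name) */
      body { font-size: 1.2em; }    /* enlarge text for small screens */
    }

The underlying content stays the same; the page is simply dressed differently for each class of device.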

But this is nothing compared to what we are going to see in the future.

At present, while we might get different sites depending on whether we log on with a tablet or a desktop, we are always accessing the information in basically the same way.

However, our future selves might be accessing the internet through a number of different means, which require the information to be displayed in ways that are virtually incompatible.

[Inline image: futurama-internet]

Virtual reality and the future of the internet

While some of these technologies are yet to be invented, there are a few that look likely to grow in use and dominance.

The most prominent of these is virtual reality. Oculus Rift is nearing consumer-readiness, and tech giants such as Sony have finally started to wade into the VR pool.

For most, VR is about gaming, but there is also a movement to make it work on the web.

For anyone who has dreamed about a fully immersive internet such as the one portrayed in the Futurama episode A Bicyclops Built for Two, the prospect is very exciting.

The leading work in this area is a project called Janus VR, which is an internet browser developed specifically for the Oculus Rift.

In its most basic form, Janus VR reinterprets the web as 3D spaces, with links as doors and images as pictures on a virtual wall. However, inventor James McCrae has also created Janus-specific markup that web designers can add to any site they build.

Users browsing from regular computers won’t see any effects of the code, but if you visit the site with a Rift you could be met with a full 3D world, complete with interactive elements. Other users can even meet you there and communicate over voice or text.
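Loosely, this works by embedding a scene description in an otherwise ordinary page, which conventional browsers simply ignore. The snippet below is an illustrative sketch in the spirit of Janus VR’s scene markup rather than a definitive sample of its syntax – the element and attribute names are approximations:

    <body>
      <!-- Normal page content renders as usual in a conventional browser -->
      <!-- Illustrative, Janus-style scene description; element names are approximate -->
      <FireBoxRoom>
        <Room>
          <!-- an image hung on the wall of the 3D room -->
          <Image src="poster.jpg" pos="0 2 -5" />
          <!-- a link rendered as a doorway into another site -->
          <Link url="http://example.com" pos="3 0 -5" />
        </Room>
      </FireBoxRoom>
    </body>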

Janus VR is very much in its infancy, but its potential is obvious and support is growing. Before long it could become a common browsing method with its own set of standards, completely separate from those used for the traditional web.

Hearing the web through virtual assistants

The projected rise of virtual assistants – starting with today’s technologies such as Apple’s Siri and Google Now – also presents a possible alternative version of the web.

Chris Brauer, co-director of CAST at Goldsmiths, University of London, recently said that virtual assistants (VAs) would in the future be our primary access point to the web.

We would ask questions of our own personal VAs, who would provide us with answers through their own web searches.

If VAs become this common, web design – or at least a part of it – will undoubtedly evolve to match.

Just as web design trends have closely followed the best approaches to getting a high Google ranking, the web’s content could be increasingly presented in a manner meant for virtual assistants, not humans, to access.

Given that some of us will still wish to access the web through traditional means, this information is likely to end up in its own separate space – a section of the internet only accessible by VAs just as the VR web is only viewable on a VR-compatible browser.
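A hint of what machine-first content might look like already exists in the structured data some sites publish for search engines. The snippet below is a minimal sketch using the schema.org vocabulary in JSON-LD – the property values are invented for illustration – showing a machine-readable summary embedded alongside the human-readable page, which an assistant could parse without ever rendering the design:

    <!-- Illustrative values only; sits alongside the normal human-readable markup -->
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "The future fracturing of the internet",
      "author": { "@type": "Person", "name": "Example Author" },
      "datePublished": "2015-06-01"
    }
    </script>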

[Inline image: internet]

The internet’s fractured future

Undoubtedly there will be other means of access that require versions of the web suited to their particular needs, brought about by new developments in technology that are barely ideas at present.

All of this will result in an internet with many faces – although it will all be one system, the code for each access type will be unreadable by the others.

As a result, the internet as experienced on different devices will be so radically different that non-techy users will think of it as several completely separate things.

The internet as we know it will be one of several, and may even fade into obscurity as other access methods become more popular.


Featured image courtesy of Sergey Galyonkin. Second inline image: screenshot from Futurama S2E13. Third inline image courtesy of Martin Deutsch.


Factor Reviews: Google Home

Once in a while I get the chance to try out a product that really makes me feel like I’m living in the future. Not because it feels outrageous or space-agey, but because it simply and effortlessly provides something that not all that long ago would have seemed like magic. Google Home, the smart home speaker and rival to Amazon’s Echo, is one of those products.

Combining beautiful hardware design with a delightfully simple user interface, it’s an absolute pleasure to set up and use. Connecting to the supporting Android or iOS Google Home app – which, if you are an Android user like me, you probably already have – the setup is very straightforward, with clear, easy-to-follow steps and lovely little animations while you wait. And you don’t have to wait long: for me, the time from opening the box to starting to use it was less than three minutes.

Once set up, it is extremely easy to get going with Google Home. The initial setup includes suggested interactions to get you started, and it very quickly becomes second nature to ask the device questions, add notes or get it to start timers.

Which is good, because combined with the extremely long response distance – I found it worked fine from the other side of my flat – Google Home is an invaluable tool for cooking and other activities where you have your hands full.

Ok Google: a rapidly expanding knowledge base

When it comes to asking questions, Google Assistant is a very knowledgeable source, with the ability to answer accurately on subjects ranging from obscure celebrities’ heights to the distance between various planetary bodies. Sometimes I did find it unable to answer my query, but usually only when it required the cross-referencing of multiple knowledge sources. And when I broke a question down into several sub-queries, it had no trouble providing the answers.

There are also, if you are so inclined, rather fun interactive quizzes, which made for a bizarre but entertaining session with family members.

One of the best features, however, is the response to “Tell me about my day”, which includes weather, a roundup of any appointments (automatically synced from your Gmail account, of course) and a rundown of today’s headlines. It is not only futuristic but also genuinely helpful, and a feature I am increasingly using while having my morning coffee.

In addition, one of the real appeals of Google Home is how quickly the search engine giant is adding features. It has already improved – without any input from me – in the time I’ve been testing it, and it’s clear it will continue to do so, seemingly far quicker than rivals such as the Amazon Echo.

Smooth sounds: Google Home’s voice

The UK edition of the Google Assistant should also be praised for its voice – I personally found the UK female Siri voice to be intensely irritating, sounding condescending and rather too much like presenter Holly Willoughby. By contrast Google’s chosen voice is helpful and supportive, and someone I could happily hear on a very regular basis.

This is something that should not be underestimated in a voice-based assistant.

It also, notably, was very good at responding to a host of different accents, although unfortunately I was not able to test it with some of the more extreme regional accents of the UK – unless you count some fairly rubbish attempts at Scottish, which, to the device’s credit, it did respond to.

The speaker itself could be better, but not without adding significantly to the price: there are less bassy speakers out there, but none of them have a built-in assistant, and Google Home’s is certainly decent, just not amazing.

Killer connectivity: Chromecast, Spotify and more

One ability that makes the Google Home invaluable is its integration with services such as Spotify, and with hardware such as Google’s Chromecast.

The result is a device that will play almost any music you care to name, or will allow you to cast a TV show via Netflix simply using your voice. Which feels shiny and amazing.

However, the results can be less than perfect if there are multiple similarly named programmes from which it has to choose. Asking for Gilmore Girls, for example, seems to default to playing 2016’s Gilmore Girls: A Year in the Life rather than the original, while if you want anything other than the original Star Trek series to play, you will need to specify.

The device also has some widely supported integration with home automation products such as smart bulbs, although I was not able to test these.

Insanely intuitive: Google Home’s ease of use

Despite all of these exciting features, the moment that really convinced me of Google Home’s specialness was when I introduced my boyfriend’s mum to it. For context, she is not a tech-savvy person: I have known her to need assistance to click ‘continue’ in an app on more than one occasion, and she is one of the most prolific adders of superfluous toolbars I have ever encountered.

So when I introduced her to this device, I expected the usual confusion and issues. Instead, she took to it better than any gadget I have ever seen her with. Within five minutes she was happily asking it questions and getting it to play music, and she now uses it without prompting or help whenever she visits.

Google, you have performed a miracle: I’m not sure this device could be more intuitive if it tried.

Google Home versus Amazon Echo

Of course, if you’re thinking about buying a Google Home, you’re probably wondering if it’s a better option than its main rival, Amazon Echo. And the honest answer to this is that it depends on what tech you have already, and what you want it for.

If you want to effortlessly buy things just by speaking, the Echo is a better shout. But if, like me, you’re all about finding out things and getting updates on what you need to do next, and do not want to make spending money any easier, then the Google Home is for you.

Similarly, if you already have Google products such as the Chromecast and Gmail, you’re in a better place to fully use this smart speaker, which, when fully utilised, is an absolute gem.

Factor’s verdict:

Drone spacecraft will let us explore inaccessible parts of galaxy: NASA scientists

Autonomous spacecraft are under development, and will in the future allow us to explore parts of the solar system, and later the galaxy, that are inaccessible to human explorers, according to NASA scientists.

Writing in the journal Science Robotics, Steve Chien and Kiri L Wagstaff, from NASA’s Jet Propulsion Laboratory, said that the next generation of space robots will be able to “think for themselves”, allowing them to continue to take readings, interpret data and detect notable geological events on other planets, even when out of contact with Earth.

“By making their own exploration decisions, robotic spacecraft can conduct traditional science investigations more efficiently and even achieve otherwise impossible observations, such as responding to a short-lived plume at a comet millions of miles from Earth,” the authors wrote.

An artist’s impression of the ongoing unmanned Juno mission. Above: an artist’s impression of the Cassini spacecraft passing through short-lived plumes on Saturn’s moon Enceladus. Images courtesy of NASA/JPL-Caltech

Autonomous spacecraft would tackle one of the biggest issues currently faced when working with unmanned spacecraft: communication blackouts. At present, work with unmanned craft is hampered by periods where communication is impossible or severely delayed. In these cases, the craft can often be left idling, with no instructions to carry out until it re-establishes contact.

However, advancements in artificial intelligence are increasingly making it possible for such spacecraft to continue to work without direct instruction, instead carrying out overall directives in response to the environments they are operating in.

“One goal of autonomy is to enable robots to detect and respond to unexpected conditions without sitting idle until the next Earth command arrives,” wrote Chien and Wagstaff. “In an exciting development, many spacecraft have increasing ability to make their own decisions and accelerate scientific discoveries.”

The spacecraft could even respond to short-lived events that normally would not be possible to capture due to the delay between scientists on Earth recognising such an event was happening and the spacecraft receiving an instruction to record data from it. One example of such an event would be active plumes, such as those recently observed on Saturn’s moon Enceladus, which may in the future be observed on other bodies including moons and comets.

The current best available image of the Alpha Centauri group, taken by the Hubble Space Telescope. Autonomous spacecraft could be used to make the 60-year journey to the system. Image courtesy of ESA/NASA

The advancements could even, the authors say, allow systems outside of our solar system to be explored. The neighbouring system of Alpha Centauri, for example, would take a robot spacecraft 60 years to reach, making it a viable option for a drone craft to explore, but not one we could send humans to.

“The ultimate challenge for robotic science explorers would be to visit our nearest neighbouring solar system, Alpha Centauri,” Chien and Wagstaff wrote.

“Upon arrival, the spacecraft would need to operate independently for years, even decades, exploring multiple planets in the system. Today’s AI innovations are paving the way to make this kind of autonomy a reality.”