In 1997, Sepp Hochreiter and Jürgen Schmidhuber published a paper on a type of recurrent neural network they called long short-term memory (LSTM). In 2015, Google used LSTMs in a new implementation of speech recognition in its smartphone software. Both are brilliant minds, all due respect to them.
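For the curious, here's a minimal sketch of what using an LSTM looks like in practice. I'm using PyTorch's nn.LSTM purely for illustration; the library, the layer sizes, and the fake "acoustic feature" input are all my own choices, not anything from the 1997 paper or Google's 2015 system:

```python
import torch
import torch.nn as nn

# A toy LSTM: maps a sequence of 40-dim feature vectors
# (think acoustic features in speech recognition) to hidden
# states. All sizes here are arbitrary, chosen for illustration.
lstm = nn.LSTM(input_size=40, hidden_size=128, num_layers=2, batch_first=True)

# Fake batch: 8 sequences, each 100 timesteps of 40 features.
x = torch.randn(8, 100, 40)

# output holds the hidden state at every timestep; (h_n, c_n)
# are the final hidden and cell states. The cell state is the
# "long short-term memory" that lets information persist across
# many timesteps without the gradient vanishing.
output, (h_n, c_n) = lstm(x)

print(output.shape)  # torch.Size([8, 100, 128])
print(h_n.shape)     # torch.Size([2, 8, 128])
```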
These things sound unfathomable to us, but in reality they're not all that unlikely. Ray Kurzweil has a more human-centric prediction, though: that we will merge with artificial intelligence and celebrate the joy of existence by turning most of the visible matter in the universe into hyper-intelligent supercomputers. These ideas probably sound like Scientology, but keep in mind that they do not violate any known laws of physics. If an entity of cosmic intelligence can exist, then just about anything is possible.
For any of this to be possible, though, we must survive all of our current existential threats, and there are many. The thought of nuclear war keeps me awake at night... It is absolutely terrifying that there are currently over 10,000 nuclear weapons in the world, and that even a fraction of them could do unimaginable damage: a worldwide nuclear winter that wrecks the atmosphere, destroys crops, and brings famine, leading to global destabilization and billions of deaths.
Another interesting thought I find myself stuck on is our civilization's progress and the Fermi paradox. If hyper-intelligent AI is inevitable, barring catastrophe, then where are the other hyper-intelligent AIs in the universe? We still haven't found evidence of hyper-advanced civilizations. Is this evidence of some limiting factor, a "Great Filter", that will drive us to extinction before we can achieve AGI? Perhaps, or maybe we're witnessing something that has not yet occurred anywhere else in the universe, however unlikely that may seem.