Possible futures

  • Machine learning seems to be at the forefront of “big data” because it (seemingly) allows for predictive analysis.
  • The volume of data we are creating now, and the volume that will be created in the future, will make privacy even more important, and the stakes will be enormous: for control, for profit and for humanitarian purposes.
  • With an increased volume of data, the demand for data scientists, analysts and data management experts will also increase (and so will their salaries).
  • The rise of the Chief Data Officer (CDO).
  • Algorithms will become a commodity.
  • More investment in “big data” (the next bubble).
  • More development of “big data” applications with prescriptive analysis capabilities.
  • China wins the AI race because it has no privacy laws or ethical barriers holding back technology development, especially not ones that serve the state and government. Cheap and very sophisticated resources are available everywhere, as are vast and incredibly rich amounts of data and experience that have already been collected (from facial recognition to human interactions with social currencies). And China has a government that controls industry and has already created a surveillance state.
Amtis travelled forward in time and found four different futures: one where data is treated like property and data markets are created; one where people are paid for their data as labour; one where data is stored in nationalised funds; and one where users have clear rights over their data. ~ Valentina Pavel, Our Data Future, four scenarios.

Anthropocentric mindsets

  • In a perfect world, our personal data would be considered private and would not leave us. But as soon as we give information to someone else, it is exposed and can (and most likely will) be abused. Privacy by design is still a way off. There may be new laws like the GDPR, but they are hardly enforced; there is no active oversight.
  • Security is a cat-and-mouse game played by specialised security experts who are throwing good effort after bad, because developers focus on new features (for profit) rather than on security and privacy by design. We need to break out of that. Applying AI to understand attacker behaviour could get us there.
  • It is not about the next algorithm. It is about how algorithms are used, and algorithms used out of context can be dangerous.
  • Deep learning cannot solve real problems on its own. For example, visibility into which devices and users are on a network has nothing to do with AI; it is an engineering problem, and it is still not solved. Artificial intelligence can only augment human capabilities. Domain and security experts who really understand the problems, and who catch what gets overlooked, are still needed.
  • And the big blind spot, again, is that we live on a finite planet with finite resources. All of the futures in which digital presences persist assume infinite resources to be used in ravenous runs of commodification, or assume that the problems will solve themselves if sufficiently ignored. Maybe mums will come to clean up our mess, or perhaps one of the Gods will?
 
 
  • Last modified: 2020/03/20 21:11