Algorithms are like digital cooking recipes: sets of step-by-step instructions for computers and other electronic devices. Considerable power lies in the detail of each step. Just as following or fumbling a recipe makes the difference between a delicious chocolate cake and a complete failure, an algorithm’s step-by-step sequence of operations can solve a particular computational task – or fail at it.
Clearly, algorithms can help solve much more complex tasks than baking cakes, and they can also come with much more complex side effects. Recipes do not tend to discriminate, but algorithms might: people can be discriminated against by an algorithm’s automated decisions, for example when applying for a loan or a job. When an algorithm amounts to a warped recipe, the consequences can extend to development itself. Promises of a better future need to be accompanied, or preceded, by promises of better algorithmic recipes.
Machine learning systems are a popular family of algorithms that learn from data and encode the patterns they find into structured models, such as rule systems, decision trees, or neural networks. Algorithms can be combined in creative ways, and they interact so closely and cleverly with human decisions and behaviour that the resulting combinations might better be called “algorithmic systems”.
An advanced algorithm can gather and process information, interact with other algorithms, and arrive at a conclusion. That conclusion may be a list of classifications and what they mean, such as when an image-recognition algorithm guesses whether an image shows a chair or a car.
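To make this concrete, here is a minimal sketch in Python of how such a system encodes patterns from labelled data into a decision-tree model and then classifies new inputs. It uses the scikit-learn library, and the two-feature “chair versus car” data are invented purely for illustration.

```python
# A minimal sketch: training a decision tree to separate two classes.
# The features and labels below are invented for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Each row describes one object with two made-up features:
# [height in metres, weight in kilograms]
X = [[0.9, 7.0], [1.0, 9.5], [0.8, 6.0],          # chairs
     [1.5, 1200.0], [1.4, 950.0], [1.6, 1400.0]]  # cars
y = ["chair", "chair", "chair", "car", "car", "car"]

# The algorithm learns split rules (e.g. "weight > 100 kg?") from the
# data and encodes them as a tree of if/else decisions.
model = DecisionTreeClassifier().fit(X, y)

# The trained model now classifies unseen inputs.
print(model.predict([[1.2, 8.0]]))        # likely "chair"
print(model.predict_proba([[1.2, 8.0]]))  # class probabilities
```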
In other instances, algorithmic systems have more serious repercussions. For example, judges in the United States use simple but proprietary machine-learning algorithms to assess whether a criminal offender is likely to commit a similar crime again (Kehl, D., Guo, P., & Kessler, S. (2017). Algorithms in the Criminal Justice System: Assessing the Use of Risk Assessments in Sentencing. Responsive Communities). Algorithmic systems seem to reduce complexity and simplify human decision making by “basing decisions on the data”. But these systems may fail to correctly predict who will be a repeat offender – and so far they seem at times to err on the side of racial profiling: minorities tend to be treated as higher risk. The bias stems from a history of racial bias in sentencing and arrests, which produces skewed data that are then fed into algorithmic systems.
Algorithms like these now influence many aspects of modern life in almost invisible ways – at least until they fail. In the past several years, the risk of discrimination and hidden biases embedded in algorithmic systems has triggered considerable debate, at least among those interested in the social dimensions of the unfolding algorithmic revolution. The examples go beyond machine learning that incorrectly assesses the likelihood of minorities committing a future crime: investigations have shown that algorithmic systems also discriminate against citizens by assigning unjustifiably low credit scores, denying healthcare to disabled people on the basis of faulty historical data, and weeding out CVs in discriminatory ways during recruitment.
The key issue here is biases created by data. That is, if credit ratings or crime data entail biases towards certain groups in society, or omit them entirely – say, minorities in the United States or people from particular regions in a country – then algorithmic systems may propagate these patterns and lead to discriminatory decisions. As some researchers have noted, “decisions produced by the algorithms are as good as the data upon which such decisions are computed and the humans and systems operating them” (Janssen, M., & Kuk, G. (2016). The challenges and limits of big data algorithms in technocratic governance. Government Information Quarterly, 33(3), 371–377).
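A stylised Python sketch can illustrate the mechanism. The numbers below are entirely invented, but they show how a model trained on historically skewed labels simply reproduces that skew in its risk scores.

```python
# A stylised illustration with invented data: a model trained on
# historically biased outcomes reproduces the bias in its predictions.
from sklearn.linear_model import LogisticRegression

# Feature: group membership (0 or 1). Label: past "high risk" decision.
# In this invented history, group 1 was labelled high risk far more
# often, regardless of actual behaviour.
X = [[0]] * 50 + [[1]] * 50
y = [0] * 45 + [1] * 5 + [0] * 15 + [1] * 35  # skewed historical labels

model = LogisticRegression().fit(X, y)

# The model now assigns a much higher risk score to group 1 members,
# purely because of the skew in the training data.
print(model.predict_proba([[0]])[0][1])  # low "high risk" probability
print(model.predict_proba([[1]])[0][1])  # high "high risk" probability
```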
Increasing relevance
These concerns are relevant in two arenas that might surprise some: development issues and biosphere-based sustainability.
Algorithmic systems are already a fundamental part of the way we perceive, modify, and respond to the natural world around us. Researchers, policymakers, and practitioners make use of algorithms and their results in, for example, climate-change modelling, landscape planning, and fish stock assessments. Businesses employ image-processing algorithms to assess the presence of gold ores; 3D object-recognition algorithms support deep-sea mining of rare earth minerals; algorithmic systems used in agriculture analyse weather and soil data to maximise production. And these are only a few examples of the many algorithms used across diverse sectors of our societies.
Assuming that all of these applications are, or will remain, flawless in the face of changing social and ecological circumstances is unwise (see http://www.cell.com/trends/ecology-evolution/abstract/S0169-5347(17)30161-1). If we look closely, some of their shortcomings can already be detected.
An interesting example of the close interplay between algorithmic systems and the way people perceive and respond to environmental change is the application of REDD+ schemes in Indonesia – programmes set up through the United Nations to reduce emissions from deforestation and forest degradation by offering economic compensation. As Robert M. Ochieng explains in his PhD thesis, the monitoring, reporting, and verification systems supporting REDD+ schemes rely heavily on algorithms and data (https://www.cifor.org/library/6523/the-role-of-forests-in-climate-change-mitigation-a-discursiveinstitutional-analysis-of-redd-mrv/). These underpin estimates of carbon mitigation metrics and, in the end, determine the economic compensation a country receives.
During the system’s implementation in Indonesia, national stakeholders forcefully questioned the planned forest monitoring system. The reason was that the algorithms and assumptions embedded in it were based on an ecological understanding of how an Australian forest works, rather than an Indonesian one, and lacked the transparency stakeholders expected. While this particular issue has been resolved, it points to the importance of recognising that algorithmic systems are embedded in socio-political and ecological contexts, and have considerable influence over decisions important for biosphere-based sustainability and development.
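To see how much such embedded assumptions can matter, consider a deliberately simplified Python sketch of a carbon-stock estimate. The parameters below are invented placeholders, not values from any actual REDD+ monitoring system.

```python
# A deliberately simplified sketch of why embedded ecological assumptions
# matter. The parameters are invented placeholders, not real REDD+ values.

def carbon_stock(biomass_t_per_ha: float, area_ha: float,
                 carbon_fraction: float) -> float:
    """Estimate carbon stock (tonnes C) from biomass density and area."""
    return biomass_t_per_ha * area_ha * carbon_fraction

# The same forest area, assessed under two different sets of assumptions
# about biomass density and carbon content (both made up for illustration):
assumptions_a = carbon_stock(biomass_t_per_ha=150, area_ha=10_000,
                             carbon_fraction=0.47)
assumptions_b = carbon_stock(biomass_t_per_ha=250, area_ha=10_000,
                             carbon_fraction=0.49)

# The gap translates directly into different mitigation estimates and,
# under REDD+, different economic compensation.
print(assumptions_a, assumptions_b)
```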
Algorithms should not be allowed to fail quietly. For example, data indicating the hole in the ozone layer were overlooked for almost a decade before its discovery in the mid-1980s. The extremely low ozone concentrations recorded by the monitoring satellites were treated as outliers by the processing algorithms and discarded, delaying our response to one of the most serious environmental crises in human history by a decade. Diversity and redundancy helped uncover the error (see the Biosphere Code Manifesto, principle 5).
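The failure mode is easy to reproduce. The short Python sketch below, with invented readings and an invented plausibility threshold, shows how a naive quality-control filter can silently discard exactly the extreme values that carry the signal.

```python
# An invented illustration of quiet failure: a naive quality-control
# filter silently drops extreme readings that are in fact the real signal.

readings = [310, 305, 298, 302, 180, 175, 295, 300]  # made-up ozone values
PLAUSIBLE_MIN = 250  # invented threshold: "values this low must be errors"

# The filter keeps only "plausible" values and discards the rest,
# without warning anyone.
accepted = [r for r in readings if r >= PLAUSIBLE_MIN]
discarded = [r for r in readings if r < PLAUSIBLE_MIN]

print("used in analysis:", accepted)   # the anomaly never reaches analysts
print("silently dropped:", discarded)  # 180 and 175: the actual signal
```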
As algorithmic systems continue to be used in our interactions with the biosphere – in agriculture, forestry, fishing, and more – they should also be sensitive to an increased understanding of how ecosystems and the biosphere operate in the face of complexity, surprise, and change. Such algorithmic systems should not aim only at enhancing efficiency in resource extraction, for example, by maximising biomass production in forestry, agriculture, and fisheries. They should build on resilience principles and encourage learning, diversity, and redundancy.
A call for transparency
Algorithmic systems are becoming increasingly sophisticated and effective, through the application of machine learning and deep neural networks sometimes captured under the term “artificial intelligence”. They are also clearly finding an ever-growing universe of applications in sectors critical for biosphere stewardship and development.
A marketplace for predictive agriculture algorithms, PrecisionHawk, now thrives. There you can buy services based on the integration and processing of large datasets, or “big data”, that let users optimise urban planning, large-scale fishing strategies, or agricultural investments in near real-time; other examples include Descartes Labs, DigitalGlobe, and Orbital Insight.
Industrial-scale reforestation services can now use very large sets of real-time data to “create an optimised planting pattern”, for example with DroneSeed. There is also a growing community exploring “AI-D” or “AIForAll” – artificial intelligence for development with a special focus on the world’s most vulnerable communities. These are just a few examples.
While this flurry of innovation is welcome, ecological literacy, algorithmic transparency, and accountability remain key. Algorithmic systems developed in the academic sphere – say, those underpinning climate projections – must be reviewed by peers in the science community, and should sometimes even be made available for others to test whether they generate consistent outcomes. Public funders such as development agencies can request that projects supported by algorithmic systems be made as transparent as possible. But given that such systems can change and adapt so quickly, single moments of transparency or scrutiny do not always guarantee rigorous oversight.
Transparency may, however, become a concern in relation to the rapidly growing private sector, because innovative applications of advanced algorithmic systems are often treated as protected intellectual property. Perhaps this sounds far-fetched. Consider, however, recent attempts by farmers in the United States to “hack” their modern farming equipment with Ukrainian firmware, as a way to work around expensive contracts with private actors that do not allow tweaks to the code embedded in their tractors.
Unfortunately, a number of challenges could prevent algorithmic transparency. One is that advanced algorithmic systems can be non-transparent even for experts, simply because of their inherent complexity or their use of highly abstract input data. Another is that algorithmic transparency can open up a product or service to abuse, and can thus be exploited (Janssen & Kuk 2016). Some even argue that an overemphasis on transparency could make algorithmic systems “stupid”.
Yet advancements in algorithmic systems may also increase transparency and advance sustainability at the same time. That’s at least the ambition of new partnerships aiming to fully explore the potential of the algorithmic revolution for sustainability, such as “AI for Earth”, and of groups interested in novel applications of blockchain technologies, to be discussed during the AI for Good 2018 event in Geneva.
The applications of blockchains, for example in the cryptocurrency bitcoin, have generated considerable buzz in the media lately. Blockchains are designed to be hard to manipulate, hard to hack, and decentralised – characteristics that, among other things, give blockchain technologies their much-touted potential to drastically increase supply-chain transparency and help tackle deforestation as well as illegal and inhumane fishing practices.
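Why is a blockchain so hard to manipulate? The minimal Python sketch below – a toy, not a real blockchain protocol – shows the core idea: each block stores the hash of its predecessor, so altering any past record breaks every later link.

```python
# A toy illustration of the core blockchain idea: each block stores the
# hash of the previous block, so tampering with history is detectable.
import hashlib

def block_hash(previous_hash: str, data: str) -> str:
    """Hash a block's contents together with its predecessor's hash."""
    return hashlib.sha256((previous_hash + data).encode()).hexdigest()

# Build a tiny chain of supply-chain records (invented data).
records = ["catch: 2t tuna, vessel A", "landed: port X", "sold: market Y"]
chain = []
prev = "0" * 64  # placeholder hash for the first block
for data in records:
    h = block_hash(prev, data)
    chain.append({"data": data, "prev": prev, "hash": h})
    prev = h

# Tampering with an early record changes its hash, so a later block's
# stored "prev" no longer matches: the chain fails verification.
chain[0]["data"] = "catch: 1t tuna, vessel A"
for i in range(1, len(chain)):
    expected = block_hash(chain[i - 1]["prev"], chain[i - 1]["data"])
    if chain[i]["prev"] != expected:
        print(f"tampering detected at block {i}")
        break
```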