AI is running far ahead of attempts to resolve its ethical issues

The response in time is likely to be more regulation.

May 2019

In the Wisconsin city of La Crosse in 2013, Eric Loomis, then in his early 30s, pleaded guilty to eluding police in a stolen car (while denying any role in a drive-by shooting involving the vehicle). The judge sentenced Loomis, who had a criminal record, to six years in jail, at the longer end of possible terms.

So? Well, the judge based Loomis’s prison term partly on the recommendations of an artificial-intelligence, or AI, program. Under the ‘Compas classification’,[1] secret algorithms compare information about the defendant against a criminal database to assess the risk that person poses. In Loomis’s case, the prosecutor told the trial judge that the Compas report showed “a high risk of violence, high risk of recidivism”.[2]

Loomis appealed against the length of the sentence, saying he had been given no opportunity to evaluate the algorithms and that their assessment, because it took account of his gender, violated his ‘due process’ rights. At a court hearing, one expert witness said the Compas program had a “tremendous risk of overestimating an individual’s risk”.[3]

The court’s use of AI to sentence Loomis attracted much criticism,[4] including from within the tech community,[5] because it raised questions about the role that ‘big data’ and AI are playing in everyday decisions. Expect more such controversies for society to resolve, because AI’s rapid deployment is creating many ethical issues – a gentler way of saying that AI is capable of ill as well as good.

AI is certainly causing concern – Henry Kissinger cautions that it’s “how the enlightenment ends”[6] while the late Stephen Hawking warned “it could spell the end of the human race”.[7] Among potential dangers, AI might be used by despots who want to enforce censorship, micro-target propaganda and impose society-wide controls on citizens. Many think the disinformation, conspiracy theories and echo chambers that AI-driven recommendation engines can promote on social media deepen social tensions. AI can be used in warfare. The technology has the potential to make swathes of workers redundant. Used inappropriately, AI can be discriminatory or invasive. Many worry about the privacy violations surrounding the data used to train and improve AI algorithms. (Hawking was warning about ‘superhuman AI’, which, as it is still an aspiration, is not a pressing ethical concern.)

Many of the concerns about AI are tied to the nature of the algorithms. People worry that society is handing over decision-making to secret software code – essentially lists of instructions – that has no understanding of the context, meaning or consequences of what it does. They fret that algorithms are being entrusted with tasks they are incapable of fulfilling, and that algorithms magnify the biases and flaws of their human coders and of the data fed into them. People are concerned about how algorithms can manipulate behaviour and promote digital addiction. They see that algorithms can be gamed by attention seekers, from anti-vaxxers[8] to populist politicians and extremists.

People are tackling some of the ethical issues involved. Researchers have withheld AI research because of possible misuse.[9] Governments, notably the EU, have acted to protect privacy. The EU is developing an AI code of ethics.[10] Companies are creating principles around AI use – Google offers seven.[11] Businesses are setting up ethics boards to monitor AI’s deployment. Platforms are using AI to inhibit the ability of other algorithms to spread viral extremist content. Data gatherers are better protecting user information. US tech employees are rebelling against AI’s use in warfare.

But not enough might be happening to limit AI’s possible harm. People seem blasé about how their online data trails are used to sway their spending and thinking. Businesses appear far more focused on generating positive returns from AI than on overseeing and mitigating its negative side effects. Autocratic states such as China are increasingly using AI to tighten their control over media and communication. When ethical issues are raised, valid rebuttals can result in inaction. Authorities with genuine concerns appear hobbled by the public’s fondness for the cyberworld.

Be aware that AI is being deployed at a faster rate than ethical issues can be properly identified and resolved. The moral concerns encircling AI are likely to become big enough political issues in time to warrant much public scrutiny and government intervention.

To be sure, many of the ethical issues raised are broader than AI. Some of tech’s biggest ethical issues, such as gene-edited babies, lie outside AI. Protests against tech’s use in warfare reach beyond AI too,[12] as do the problems with data gathering. Discussions about ethics could prove divisive and prolonged. Many of the ethical issues swamping AI are everyday ones as old as humanity – AI is just a new setting for them.

But that fresh setting looms so large that AI is bound to spark controversies, especially since AI’s political weakness is that it is easy to demonise. Expect a rigorous human overlay on AI in due course.[13] The challenge for authorities will be to limit AI’s possible harm without suppressing its advantages.

AI angst

Amid concern that China is ahead in key AI areas, the Pentagon in June last year said it would create a Joint Artificial Intelligence Center so the military could work with “industry, academia” and others to pursue AI applications that “will change society and ultimately, the character of war”.[14]

Fat chance, though, of full industry cooperation. The announcement came just days after Google employees forced the company to quit a Pentagon project[15] that applied AI to drone warfare.[16]

Google’s decision triggered much criticism,[17] including from Amazon.[18] Why should Silicon Valley employees hold a veto over the US military’s ability to protect the nation’s interests as defined by elected officials? One possible answer is the tale of how Dow Chemical (now part of DowDuPont) incurred billions of dollars in damages for producing napalm for US use in Vietnam in the 1960s.[19] Nonetheless, a key ethical issue raised is how much power companies can hold in a democratic nation-state that, by definition, has borders to defend.[20]

While the tech industry is split on aiding the military, it is united when fighting against privacy exemptions for intelligence agencies because it says ‘back doors’ and ‘master keys’ provide openings for hackers. A crucial ethical issue here is how much privacy citizens should be forced to give up to allow governments to provide a safe society, as well as what role, if any, private companies should have in setting these limits.

The tech industry is unified again in frowning on regimes that use AI to enshrine their power.[21] Even so, western facial-recognition companies can’t escape the ethical issue that their advancements might entrench autocrats to such an extent that despots might pose a “mortal threat” to open societies, according to George Soros.[22]

Another society-level issue causing ethical headaches is that AI might destroy so many jobs that a “new serfdom beckons”.[23] Society-wide solutions proffered include guaranteed income or employment for everyone. But critics say ‘universal basic income’ is too expensive, promotes a disincentive to work, does not generate the sense of purpose that people crave, and turns much of the population into a welfare community. They say a ‘federal jobs guarantee’ is too costly, impractical and does little to reduce inequality. The inequality and other ethical issues tied to massive AI-driven job losses would become prominent if such redundancies were to coincide with a downturn.

So much data

When people search the web[24] or use digital platforms[25], their extensive digital trails flow into algorithms to optimise the services to capture their attention. Many people are unaware of the extent of the data collection and how it is used to profile and target them. Scandals surrounding data have triggered much ethical discussion about privacy and data rights.

Pushes are underway to give users more ownership and control over their data. A UK inquiry in 2019, for instance, called for a deterrent-enforced code of conduct on data gathering and for users to have the ability to move their data to other parties and make their data available to competitors.[26]

While data ownership is contentious, protecting privacy appears a straightforward issue in ethical terms. A consensus has formed that companies should seek user consent on fair terms and have no right to collect data covertly. One example of underhand data gathering is when platforms secretly track users after they leave their sites. Another is the way facial-recognition companies ‘scrape’ photos on the internet – that is, use any published photo – to hone the technology.[27]

These practices risk regulatory action. But where does the line sit when AI is used to scour social media to assess babysitters[28] or when insurance companies use it to set premiums for policyholders?

New York State in January allowed life insurers to use predictive models to comb social media to see if policyholders take part in unhealthy or dangerous practices or have faked claims.[29] The ethical issues include the ability of the algorithms to deliver a fair outcome, the transparency of the process and the right of appeal. The only safeguards New York State enshrined were that the information gathered must be sound in actuarial terms, relevant and not ‘unfairly discriminatory’.[30]

Flawed codes

The algorithms that power AI are reams of code that can process data efficiently to assist in making parole, medical, military, work-dismissal, university-admission and many other decisions. These instructions can perform vast analysis within these narrow functions at speeds beyond human ability. They can recognise patterns in the data with much greater granularity and nuance than humans can. They often generate surprising and counter-intuitive conclusions that few humans could arrive at.

But algorithms lack many human qualities and smarts. These algorithms do not understand the cause and effect of their decisions. They lack common sense, emotion, imagination and any sense of humour or irony. They have no free will.[31] They can have inbuilt biases, generally delivered by the data that drives them. They can be gamed and outsmarted. Many aren’t that sophisticated. The ethical issue is: how can society justify handing over vital decision-making to AI when it falls well short of human ability in so many ways?

The ethical cloud over algorithms is highlighted when they are set tasks beyond their design limits. Platforms and others, for example, augment human oversight with ‘content moderation’ algorithms that scour for ‘hate speech’, bullying, harassment and worse (while also relying on alerts from the public, which then help train the algorithms). The algorithms keep much out. But enough escapes them. Algorithms have often failed to remove all copies of an offensive video because people can alter the footage enough to outwit systems that can only match earlier versions.[32] As Facebook concedes: “The more we do to detect and remove terrorist content, the more shrewd these groups become.”[33]
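
This whack-a-mole problem is easy to show in miniature. The sketch below is hypothetical Python, not any platform’s actual system: it contrasts an exact cryptographic hash, which a one-byte change defeats, with a crude similarity signature of the kind that real perceptual-fingerprinting systems implement far more robustly.

```python
import hashlib

# 'original' stands in for an uploaded video's raw bytes;
# 'altered' is the same clip with a single byte changed.
original = b"frame-data " * 1000
altered = b"frame-data " * 999 + b"frame-dat4 "

# A cryptographic hash changes completely on any alteration, so a
# blocklist of known-bad hashes misses the doctored copy entirely.
print(hashlib.sha256(original).hexdigest()[:16])  # two unrelated digests
print(hashlib.sha256(altered).hexdigest()[:16])

def coarse_signature(data, buckets=16):
    """Crude similarity signature: average byte value per chunk."""
    chunk = max(1, len(data) // buckets)
    return [sum(data[i:i + chunk]) // chunk
            for i in range(0, len(data), chunk)][:buckets]

def similar(a, b, tolerance=2):
    """Fuzzy match: every bucket of the signatures must be close."""
    return all(abs(x - y) <= tolerance
               for x, y in zip(coarse_signature(a), coarse_signature(b)))

print(similar(original, altered))  # True: the fuzzy match still catches it
```

The arms race Facebook describes follows directly: each time matching becomes more tolerant of edits, uploaders look for alterations large enough to slip outside the tolerance.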

A wider ethical issue is whether AI-dependent platforms should be responsible for the content shared and viewed on their sites, whereas now they bear no legal responsibility provided they take offending content down once notified. Another ethical issue is whether private companies should be monitoring the ‘cyber public square’ at all – private companies are acting as censors and judges of what’s appropriate, something that Mark Zuckerberg recently conceded Facebook shouldn’t be doing.[34] And what is the responsibility of users in all this? A core problem with content on social media is that enough people are prone to interact with the most sensational and vilest content. In doing so, users prompt the AI that is set up to boost their engagement to feed them more of the same.
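
That engagement loop can be reduced to a few lines. The following is an illustrative Python toy, not any platform’s real ranking system: a naive ranker that shows whichever item has the highest observed click rate ends up promoting the sensational item almost exclusively.

```python
import random

random.seed(1)
items = {"measured report": 0.05, "sensational post": 0.30}  # true click rates
clicks = {name: 1 for name in items}  # engagement estimates start equal
shows = {name: 2 for name in items}

for _ in range(5000):
    # Greedily show the item with the highest estimated click rate.
    top = max(items, key=lambda n: clicks[n] / shows[n])
    shows[top] += 1
    if random.random() < items[top]:  # users click the vile item more often
        clicks[top] += 1

for name in items:
    print(name, "shown", shows[name], "times;",
          "estimated click rate", round(clicks[name] / shows[name], 3))
# The sensational post comes to dominate the feed: the ranker optimises
# for engagement, and engagement rewards sensation.
```

No malice is coded anywhere; the skew emerges purely from optimising clicks.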

Away from terrorism and violent crime, perhaps the most consequential example of how algorithms can be exploited is Russia’s manipulation of platforms during the US elections in 2016. Since then, Facebook, working with intelligence agencies around the world, has made the ‘feed’ algorithms that rank and personalise content better at policing political manipulation. It is ironic that the most effective solution to AI’s failures to protect democracy appears to be more AI.

Another ethical issue to resolve with AI is whether to let algorithms operate in situations with infinite possibilities (such as powering driverless cars on open roads) when, for now, AI works best in defined conditions (such as translation, sophisticated board games or, in the case of driving, keeping a car within white lines on a highway). The death in 2018 of a woman struck by a self-driving car while crossing a road at night in Arizona highlighted how AI programs can prove fatal in uncontrolled situations. A central ethical issue here is whether the hope that autonomous vehicles might one day reduce road fatalities is worth the loss of life in the experimental stage. Another is who might be responsible when things go awry. Volvo in 2015 said it would be liable for all accidents involving its driverless cars.[35]

Another prominent flaw is that algorithms promote the biases of their code writers and of the data that feeds them. Amazon acknowledged this limitation in 2018 when it stopped using algorithms to sort job applications because they were biased against employing women.[36] The problem here is that data, as a record of the past, feeds algorithms the prejudices of the past. While no one defends discrimination per se and code writers can attempt to overcome this flaw, the ethical issues require subjective solutions – witness the debates around whether discrimination has occurred, the use of minority quotas and the risk of ‘reverse discrimination’.
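
The mechanism is simple enough to sketch. The hypothetical Python below is not Amazon’s model: it scores applicants by the hire rate of historically similar CVs, and because the recorded decisions were biased, so is the score.

```python
# Each historical applicant: (cv_mentions_womens_club, was_hired).
# The hiring record is skewed, as Amazon found its past data to be.
past = [(1, 0)] * 40 + [(1, 1)] * 10 + [(0, 0)] * 20 + [(0, 1)] * 30

def historical_hire_rate(feature_value):
    """Share of past applicants with this feature who were hired."""
    outcomes = [hired for feature, hired in past if feature == feature_value]
    return sum(outcomes) / len(outcomes)

# A naive screening score rates new CVs by how often similar past CVs
# led to a hire. This is pattern-matching on data, nothing more.
print(historical_hire_rate(1))  # 0.2 - CVs with the women's-club signal
print(historical_hire_rate(0))  # 0.6 - all other CVs

# Any ranker built on these scores marks down a new CV containing the
# women's-club signal, however qualified the applicant: the past
# prejudice is laundered into an apparently neutral number.
```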

Gender, race and other biases in data, and the limits on the abilities of algorithms, are prompting calls for algorithms to be regulated. Companies could come under pressure to reveal their algorithms, as France is doing with those used by the government.[37] The tech industry, however, resists such transparency, saying its formulae are intellectual property.

Such ethical issues around AI are prompting reassessments of the technology, as shown by talk of a second ‘AI winter’ (when research and deployment stall),[38] a surge in expert warnings of its potential harm,[39] and the number of recent books highlighting its flaws, such as Meredith Broussard’s Artificial Unintelligence.[40]

While the Wisconsin Supreme Court rejected the Loomis appeal in 2016[41] and the US Supreme Court in 2017 refused to hear the case,[42] the ethical issues it raised will be among the many that surround AI even as its deployment brings advantages to society.

By Michael Collins, Investment Specialist

 


[1] Equivant website. ‘Compas classification’. equivant.com/compas-classification/

[2] The New York Times. ‘Sent to prison by a software program’s secret algorithms.’ 1 May 2017. nytimes.com/2017/05/01/us/politics/sent-to-prison-by-a-software-programs-secret-algorithms.html

[3] Document from the Supreme Court of Wisconsin. Case No. 2015AP157-CR. ‘State of Wisconsin, Plaintiff-Respondent v. Eric L. Loomis, Defendant-Appellant.’ Page 11. scotusblog.com/wp-content/uploads/2017/02/16-6387-op-bel-wis.pdf?mod=article_inline

[4] The New York Times. ‘In Wisconsin, a backlash against using data to foretell defendants’ futures.’ 22 June 2016. nytimes.com/2016/06/23/us/backlash-in-wisconsin-against-using-data-to-foretell-defendants-futures.html

[5] See WIRED. ‘Courts are using AI to sentence criminals. That must stop now.’ 17 April 2017. wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/

[6] Henry Kissinger. ‘How the enlightenment ends.’ The Atlantic. June 2018 issue. theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/

[7] BBC News. ‘Stephen Hawking warns artificial intelligence could end mankind.’ 2 December 2014. www.bbc.com/news/technology-30290540

[8] The World Health Organisation lists ‘vaccine hesitancy’ as one of the “Ten threats to global health in 2019”. See: who.int/emergencies/ten-threats-to-global-health-in-2019 

[9] See Financial Times. ‘Musk-backed AI group delays releasing research over ‘fake news’ fears.’ 15 February 2019. ft.com/content/b9f9ec2c-301d-11e9-8744-e7016697f225

[10] European Commission. Media release. ‘Artificial intelligence: Commission kicks off work on marrying cutting-edge technology and ethical standards.’ 9 March 2018. europa.eu/rapid/press-release_IP-18-1381_en.htm

[11] Sundar Pichai, CEO Google. ‘AI at Google: our principles.’ 7 June 2018. blog.google/technology/ai/ai-principles/

[12] Microsoft workers don’t like the military’s use of augmented reality; some tech workers even oppose helping the military in non-lethal areas such as cloud computing. Gizmodo. ‘Employees protest Microsoft bid for huge military contract, saying it could cause “human suffering”.’ 15 October 2018. gizmodo.com.au/2018/10/employees-protest-microsoft-bid-for-huge-military-contract-saying-it-could-cause-human-suffering/

[13] Even tech giants such as Microsoft concede that “new, evolved laws” – that is to say, proper human oversight and supervision – are needed to govern AI. Crux. Catholic News Service, a US news agency that reports on the Roman Catholic Church. ‘Pope meets head of Microsoft to discuss ethics in technology, AI.’ 14 February 2019. cruxnow.com/vatican/2019/02/14/pope-meets-head-of-microsoft-to-discuss-ethics-in-technology-ai/

[14] Deputy Secretary of Defense. ‘Subject: Establishment of the Joint Artificial Intelligence Center.’ 27 June 2018. admin.govexec.com/media/establishment_of_the_joint_artificial_intelligence_center_osd008412-18_r....pdf

[15] The New York Times. ‘Google will not renew Pentagon contract that upset employees.’ 1 June 2018. nytimes.com/2018/06/01/technology/google-pentagon-project-maven.html?module=inline

[16] Trump is authorising what many consider extra-judicial killings of non-combatants by drone, though at a lower rate than President Barack Obama did. US citizens have sometimes been among those killed, making the practice even more controversial. See Foreign Policy. ‘The drones are back.’ 26 March 2018. foreignpolicy.com/2018/03/26/the-drones-are-back/. In another controversy tied to drone warfare, Trump in March revoked a requirement that intelligence officials report the number of civilians killed in drone attacks. See Bloomberg. ‘Trump cancels US report on civilian deaths in drone strikes.’ 7 March 2019. bloomberg.com/news/articles/2019-03-06/trump-cancels-u-s-report-on-civilian-deaths-in-drone-strikes?utm_source=nextdraft&utm_medium=email

[17] Google in another instance of non-cooperation in October 2018 stopped bidding for a US$10 billion cloud-computing contract with the military after staff protested. Gizmodo. ‘Google drops bid for massive military cloud computing contract amid employee pressure.’ 9 October 2018. gizmodo.com.au/2018/10/google-drops-bid-for-massive-military-cloud-computing-contract-amid-employee-pressure/

[18] Amazon CEO Jeff Bezos says: “This is a great country and it does need to be defended.” WIRED. ‘Amazon’s Jeff Bezos says tech companies should work with the Pentagon.’ 15 October 2018. wired.com/story/amazons-jeff-bezos-says-tech-companies-should-work-with-the-pentagon/

[19] See The New York Times. ‘Why napalm is a cautionary tale for tech giants pursuing military contracts.’ 4 March 2019. nytimes.com/2019/03/04/technology/technology-military-contracts.html?emc=edit_mbau_20190304&nl=morning-briefing-australia&nlid=7946863020190304&te=1

[20] In any crisis, the tech industry would find itself subservient to the state. In the meantime, non-cooperation will bolster the arguments of those who claim the tech giants are too powerful for society’s good.

[21] See The Wall Street Journal. ‘The autocrat’s new tool kit.’ 15 March 2019. wsj.com/articles/the-autocrats-new-tool-kit-11552662637?utm_source=newsletter&utm_medium=email&utm_campaign=newsletter_axiosam&stream=top

[22] George Soros. ‘Remarks delivered at the World Economic Forum.’ 24 January 2019. The relevant full quote is: “What I find particularly disturbing is that the instruments of control developed by artificial intelligence give an inherent advantage to authoritarian regimes over open societies. For them, instruments of control provide a useful tool; for open societies, they pose a mortal threat.” georgesoros.com/2019/01/24/remarks-delivered-at-the-world-economic-forum-2/

[23] Robert Skidelsky. ‘The AI road to serfdom?’ Project Syndicate. 21 February 2019. project-syndicate.org/commentary/automation-may-not-boost-worker-income-by-robert-skidelsky-2019-02?utm_source=Project+Syndicate+Newsletter&utm_campaign=d78ca17867-sunday_newsletter_24_2_2019&utm_medium=email&utm_term=0_73bad5b7d8-d78ca17867-105721621&mc_cid=d78ca17867&mc_eid=736d1f7af2

[24] See Axios. ‘What Google knows about you.’ 11 March 2019. axios.com/what-google-knows-about-you-3f6c9b20-4406-4bda-8344-d324f1ee0816.html?utm_source=newsletter&utm_medium=email&utm_campaign=newsletter_axiosam&stream=top

[25] See Axios. ‘What Facebook knows about you.’ 2 January 2019. axios.com/facebook-personal-data-scope-suer-privacy-de15c860-9153-45b6-95e8-ddac8cd47c34.html

[26] UK government. Independent report. ‘Unlocking digital competition, report of the digital competition expert panel.’ 13 March 2019. gov.uk/government/publications/unlocking-digital-competition-report-of-the-digital-competition-expert-panel

[27] See NBC News. ‘Facial recognition’s ‘dirty little secret’: Millions of online photos scraped without consent.’ 12 March 2019. nbcnews.com/tech/internet/facial-recognition-s-dirty-little-secret-millions-online-photos-scraped-n981921

[28] US-based Predictim is one of many companies that offer AI-based services analysing people’s speech, facial expressions and online postings. Predictim software was banned from Facebook and Twitter because it analysed the online activity and speech of babysitters for their likelihood to bully, act badly or take drugs, without explaining how it reached its judgments. See BBC. ‘Predictim babysitter app: Facebook and Twitter take action.’ 27 November 2018. bbc.com/news/technology-46354276

[29] The New Yorker. ‘Why the life-insurance industry wants to creep on your Instagram.’ March 2019. newyorker.com/culture/cultural-comment/why-the-life-insurance-industry-wants-to-creep-on-your-instagram?utm_campaign=aud-dev&utm_source=nl&utm_brand=tny&utm_mailing=TNY_Daily_022619&utm_medium=email&bxid=5be9fbe92ddf9c72dc896f2b&user_id=49424385&esrc=&utm_term=TNY_Daily

[30] New York State. Department of Financial Services. ‘Insurance circular letter No 1 (2019). Re: Use of external consumer data and information sources in underwriting for life insurance.’ 18 January 2019. dfs.ny.gov/industry_guidance/circular_letters/cl2019_01

[31] See Alison Gopnik. ‘What AI is still far from figuring out.’ 20 March 2019. wsj.com/articles/what-ai-is-still-far-from-figuring-out-11553112473

[32] See Time. ‘“A game of whack-a-mole.” Why Facebook and others are struggling to delete footage of the New Zealand shooting.’ 15 March 2019. time.com/5552367/new-zealand-shooting-video-facebook-youtube-twitter/?utm_source=time.com&utm_medium=email&utm_campaign=the-brief-pm&utm_content=2019031520pm&xid=newsletter-brief&eminfo=

[33] Facebook Newsroom. ‘Hard questions: What are we doing to stay ahead of terrorists?’ 8 November 2018. newsroom.fb.com/news/2018/11/staying-ahead-of-terrorists/

[34] Mark Zuckerberg. CEO Facebook. ‘Four ideas to regulate the internet.’ 30 March 2019. Facebook newsroom. https://newsroom.fb.com/news/2019/03/four-ideas-regulate-internet/

[35] BBC. ‘Who is responsible for a driverless car accident?’ 8 October 2015. bbc.com/news/technology-34475031

[36] Reuters. ‘Amazon scraps secret AI recruiting tool that showed bias against women.’ 10 October 2018. reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

[37] WIRED. ‘Emmanuel Macron talks to WIRED about France’s AI strategy.’ 31 March 2018. wired.com/story/emmanuel-macron-talks-to-wired-about-frances-ai-strategy/

[38] See Filip Piekniewski, US computer expert. Piekniewski’s blog, on the limits of deep learning and where to go next with AI. ‘AI winter is well on its way.’ Posted 28 May 2018. piekniewski.info/2018/05/28/ai-winter-is-well-on-its-way/

[39] See CBInsights. ‘How AI will go out of control according to 52 experts.’ 19 February 2019. cbinsights.com/research/ai-threatens-humanity-expert-quotes/?utm_source=CB+Insights+Newsletter&utm_campaign=01f43bf751-TuesNL_02_19_2019&utm_medium=email&utm_term=0_9dc0513989-01f43bf751-91508793

[40] Meredith Broussard. Artificial Unintelligence. How computers misunderstand the world. MIT Press. 2018.

[41] Document from the Supreme Court of Wisconsin. Pages 26 and 27. Op cit.

[42] US Supreme Court. ‘Order list: 582 US. Certiorari – summary dispositions.’ 26 June 2017. Page 5. supremecourt.gov/orders/courtorders/062617zor_8759.pdf. See also SCOTUSblog. ‘Loomis v Wisconsin. Petitions for certiorari denied on June 26, 2017.’ scotusblog.com/case-files/cases/loomis-v-wisconsin/

Important Information: This material has been prepared by Magellan Asset Management Limited (‘Magellan’) for general information purposes and must not be construed as investment advice. This material does not constitute an offer or inducement to engage in an investment activity nor does it form part of any offer or invitation to purchase, sell or subscribe for interests in any type of investment product or service. This material does not take into account your investment objectives, financial situation or particular needs. You should read and consider any relevant offer documentation applicable to any investment product or service and consider obtaining professional investment advice tailored to your specific circumstances before making any investment decision. This material and the information contained within it may not be reproduced or disclosed, in whole or in part, without the prior written consent of Magellan. Any trademarks, logos, and service marks contained herein may be the registered and unregistered trademarks of their respective owners. Nothing contained herein should be construed as granting by implication, or otherwise, any licence or right to use any trademark displayed without the written permission of the owner.

Statements contained in this material that are not historical facts are based on current expectations, estimates, projections, opinions and beliefs of Magellan. Such statements involve known and unknown risks, uncertainties and other factors, and undue reliance should not be placed thereon. Additionally, this material may contain “forward-looking statements”. Actual events or results or the actual performance of a Magellan financial product or service may differ materially from those reflected or contemplated in such forward-looking statements.

Certain economic, market or company information contained herein has been obtained from published sources prepared by third parties. While such sources are believed to be reliable, neither Magellan nor any of its officers or employees assumes any responsibility for the accuracy or completeness of such information. No person, including Magellan, has any responsibility to update any of the information provided in this material.
