Monday, May 20, 2019

Keynote: AI Ethics and Governance

Global Artificial Intelligence Technology 2019 Conference

In late May 2019, Wendell Wallach, a researcher at Yale University’s Interdisciplinary Center for Bioethics and chair of the Center’s working research group on Technology and Ethics, delivered a keynote address at the 2019 Global Artificial Intelligence Technology Conference, held in Nanjing, China, at the Berggruen Institute’s invitation.
Wallach prefaced his remarks by discussing global normative attitudes towards AI ethics. While noting that there are techno-optimists and techno-pessimists, “most of us perceive technology as a source of both promise and productivity, and yet there is considerable disquiet over the trajectory of scientific research and the deployment of certain technologies,” he said.
Wallach began by describing the status quo of AI ethics principles. Citing the “AI for Good” Global Summit held in Geneva on May 28, 2019, he noted that there is general international consensus that AI should be developed to maximize welfare for all. Reconciling the current forty-two overlapping frameworks of AI ethics principles, he continued, should be a next priority. This entails recognizing not only where there is consensus, but also where true differences are in evidence.
Four primary values appear on nearly every AI ethics list, Wallach continued: 1) privacy, 2) accountability, 3) fairness (minimizing bias), and 4) transparency. He touched on the nature of public-private interactions and the increasing scrutiny corporations in China and elsewhere are facing as a result of the complexities of managing privacy in particular alongside sustainable AI development.
He then discussed the present and future use of computer technologies to manipulate public behavior and attitudes for marketing and political purposes, particularly through social media. Pointing to San Francisco’s recent ban on police use of facial recognition software, Wallach emphasized that the use of large databases of personal information for surveillance, by police and non-police actors alike, would become a central issue. While circumstances may lead the governments of different countries and regions to different attitudes toward technology for mass surveillance, humanity should not sacrifice accepted norms protecting individual freedoms and the rights of minorities.
Wallach also stressed the challenge of mitigating algorithmic biases – for example, race- and gender-based discrimination – inherent in input data that reiterates existing human biases. Biased outputs should not, for example, be used to make employment decisions. However, he believes it is not currently possible to eliminate all biases from either the input or the output data. We can, nevertheless, do our best to recognize the biases that remain and work to minimize their impact on decision making.
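Recognizing residual bias, as Wallach recommends, usually starts with measurement. As a minimal illustrative sketch (not a method from the talk), one common check is to compare a model's selection rates across demographic groups – the "demographic parity difference." The group names and decision data below are invented for illustration:

```python
# Illustrative sketch: flagging biased outputs by comparing selection rates
# across groups. A gap near 0.0 suggests parity; a large gap warrants review.

def selection_rate(decisions):
    """Fraction of positive decisions (e.g., 1 = 'recommend hire')."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_by_group):
    """Largest gap in selection rate between any two groups."""
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical hiring-model outputs for two groups (labels are made up).
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # selection rate 0.75
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # selection rate 0.375
}
gap = demographic_parity_difference(outcomes)
print(round(gap, 3))  # 0.375 — a gap this large would flag the model for review
```

Such a metric does not eliminate bias, in line with Wallach's point; it only makes a disparity visible so its impact on decisions can be limited.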
Concerning transparency, Wallach believes corporations shouldn’t deploy AI for mission critical applications, or any application that may harm people, if they cannot explain the process through which the AI operates or reaches a result. Connecting this thought with the need for accountability, he highlighted the unpredictable and potentially dangerous nature of Complex Adaptive Systems and learning algorithms that are black boxes. Nevertheless, Wallach appreciates that ‘unexplainable’ AI will not be harmful in many circumstances: for example, medical AI, through its significant scientific contributions, should be developed as long as the output and its application are controlled by scientists.
Wallach also discussed “technological unemployment,” a term coined by the British economist John Maynard Keynes in 1930 to capture the longstanding fear that each innovation would destroy more jobs than it creates. Regardless of the overall trajectory of job growth or decline caused by AI, Wallach believes serious disruptions to available jobs due to automation are inevitable. He noted that the Chinese government is sensitive to employment concerns, but wage growth is spiking in the tech sector and there is pressure to automate many of those jobs. Nevertheless, he warned against over-automating entire sectors and industries for the promise of small efficiency gains, as these gains may be accompanied by ballooning societal costs to provide for those no longer employed.
According to Wallach, lethal autonomous weapons are a pressing topic requiring serious attention. Lethal autonomy is not a weapon system; it is a feature set that selects and destroys a target and can be added to any weapon system, including a nuclear weapon or a high-powered munition. Even relatively dumb lethal autonomous weapons will undermine robust command, control, and accountability, and may initiate new wars or unintentionally escalate hostilities.
“Machines (algorithms) should not make life and death decisions,” said Wallach. He is skeptical, however, that human-level AI will emerge in the near future, and proposes that a human voice and meaningful human control must be present at all levels of decision-making. He advocates for international treaties that outright forbid the development of autonomous weapons.
Because the dual-use nature of most technologies creates a significant grey zone, AI ethics principles require that red lines be set in many areas. The dual use of facial recognition and voice recognition in combination with large databases of personal information should be clearly limited. While each country or region will find its own approach to adopting AI and ensuring its ethical use, the rights of individual civilians should be respected. While China’s view of individual rights differs from that of the West, it is nevertheless a signatory to the Universal Declaration of Human Rights.
Wallach also identified future paths of inquiry that need to be addressed, including the design of moral machines, systems that are sensitive to ethical considerations and factor them into their choices and actions. He outlined the evolution of moral machines from ‘operational morality’ to ‘functional morality,’ and eventually full moral agency.  Moral machines with significant capabilities to functionally evaluate choices remain a future possibility, but one that might afford many new applications for deployment.
Finally, Wallach discussed AI governance, by which he means an array of mechanisms for ethical/legal oversight. Actual government regulations should be the last source of oversight and limited to AI activities whose violation requires enforcement.  What is needed is an adaptive and agile institutionalized structure for monitoring developments, flagging gaps, and searching for means to address those gaps from a broad array of available mechanisms including: industry standards, corporate self-governance, laboratory practices and procedures, and technological solutions where feasible.
As such, and considering the increasing speed of AI development, Wallach proposed a new model for the governance of emerging technologies, developed with Gary Marchant, Director of the Center for Law and Innovation at Arizona State University, called “Governance Coordinating Committees” (GCCs). GCCs would coordinate the activities of the various stakeholders, monitor developments, note best practices and which institutions are taking responsibility for the various concerns, and flag gaps.
In his concluding remarks, Wallach stressed the need for any governance regulatory body to be, above all else, credible and trustworthy. Referencing his role in establishing the first “International Congress on the Governance of Ethical AI”, he invited the audience to join that gathering when it convenes in April 2020.
He finished by imploring the audience to consider AI within a long-term framework, and to recognize that any “short-term gains could be far outweighed by longer-term costs.” The address ended by expressing the hope that China will be a leader in properly evaluating the ethical implications of AI and in prioritizing values that ensure AI serves all.
