Please see below a selection of recent ethics-related changes.
- What's New? - Ethics
- What's Changing? - Artificial Intelligence
- What's Changing? - Technology
- What's Changing? - Values
- Moral burnout results from a so-called moral injury, which someone can suffer when they engage in, or witness and fail to stop, behaviour that violates their own moral code. Moral burnout might occur when construction workers are told to cut corners using dangerously low-quality materials, for instance, or when bank staff are incentivised to sell vulnerable people financial products they don’t understand, let alone need. It could even occur when managers are pressured into making already burnt-out employees put in even longer hours rather than hiring extra help. Moral burnout is also often the result of witnessing an unfair redundancy selection, an abusive leadership style or a failure to act on a legitimate whistle-blowing complaint.
- In its German stores, Lidl levelled the pricing of its vegan and animal-derived foods in a strategic move to steer consumer behaviour towards more sustainable and ethical food choices.
- An academic paper argued that technology affects moral beliefs and practices in three main domains: decisional (how we make morally loaded decisions), relational (how we relate to others) and perceptual (how we perceive situations) and that across these three domains there are six primary mechanisms of techno-moral change: (i) adding options; (ii) changing decision-making costs; (iii) enabling new relationships; (iv) changing the burdens and expectations within relationships; (v) changing the balance of power in relationships; and (vi) changing perception (information, mental models and metaphors).
- Psyche asked what members of a society owe to one another, and concluded that how we answer determines what safety nets societies provide for their members, and so shapes the structure of society at large. It is crucial, then, that we formulate a method by which to figure out what, at a minimum, we owe to others. To do that, we should consider whether we would be content to live the lives that the least fortunate in our society actually live. We should put ourselves into each other’s shoes - and then consider what each person needs to live well.
- Algorithms decide what ads people see online and whether consumers get loans. These computer-executed recipes acquire more and more power over humans’ lives, but do they wield it fairly? Can humans design algorithms to behave ethically? The message of Michael Kearns and Aaron Roth, computer scientists at the University of Pennsylvania, is yes, sometimes. Kearns and Roth explained when humans can expect algorithms to behave ethically, and in what cases computer science still struggles to accomplish this.
- According to political writer Slavoj Žižek, market values determined the contours of Russia’s war in Ukraine, whose president, Volodymyr Zelensky, appeared to have had a crash course in how global capitalism and democracy really work. During the first two months alone of the war, Europe sent Russia almost US$40 billion in payments for oil and gas, prompting Žižek’s observation that Western countries were more concerned about rising energy prices than Ukrainian lives. This recalls (later) UK prime minister Lord Palmerston's 1848 observation that "We have no eternal allies, and we have no perpetual enemies. Our interests are eternal and perpetual, and those interests it is our duty to follow."
- Moral foundations theory proposes that people base their judgments of right and wrong, to differing degrees, on at least five core principles:
- Care (vs harm): are there implications for the wellbeing of others?
- Fairness (vs cheating): are there implications for justice?
- Loyalty (vs betrayal): are there implications for my group?
- Authority (vs subversion): are there implications for social institutions and hierarchies?
- Purity or sanctity (vs degradation): are there implications for protecting what’s pure and sacred?
- Research is converging on the idea that morality is a collection of rules for promoting cooperation - rules that help us work together, get along, keep the peace and promote the common good. The basic idea is that humans are social animals who have lived together in groups for millions of years. During this time, we have been surrounded by opportunities for cooperation – for mutually beneficial social interaction – and we have evolved and invented a range of ways of unlocking these benefits. These cooperative strategies come in different shapes and sizes: instincts, intuitions, inventions, institutions, claimed Psyche.
- Researchers have long recognised an “attitude-behaviour gap” among consumers, who often claim to care about the ethical pedigree of the products they buy and services they use but then prioritise money with their actual spending. Researchers have proposed some explanations, such as shoppers being barraged with too much information; being unable to connect any one purchase with concrete repercussions in complex, multi-tiered supply chains; and a lack of truly ethical choices to begin with. Shoppers also don’t generally appear willing to put values ahead of other considerations, like flavour in food.
- AI experts warned that more unethical machine intelligence is coming. When surveyed by Pew Research, they said they believe profits and social control will remain primary motivations in AI research and implementation.
- The World Economic Forum launched the Global AI Action Alliance in a move to bring more voices from across sectors into the conversation on ethical artificial intelligence.
- A book by the Harvard evolutionary biologist and anthropologist Joseph Henrich, The Weirdest People in the World, outlines what he describes as the mentality of Western, Educated, Industrialised, Rich and Democratic (“WEIRD”) people, versus other non-WEIRD groups. For Henrich, WEIRD modes of thought are based around the ideals of individualism, moral consistency and, above all, the type of sequential logic used in alphabet-based writing systems. Western elites tend to assume that it is the only valid mode of thought. But in reality, Henrich notes, most societies throughout history have used different mental approaches: they see morality as context-based, presume that someone’s identity is set by family and, crucially, favour “holistic reasoning” not “analytical reasoning”, noted the Financial Times.
- A group of researchers tested a new sound field technology to see if they could convince grocery store shoppers that fair trade bananas were a more ethical choice than conventional ones. They positioned a tiny speaker near the bananas and programmed it to emit a high frequency beam of sound that could only reach shoppers within close proximity. Their hypothesis: broadcasting what seemed like someone's "inner voice" would compel them to buy fair trade bananas rather than conventional ones. The experiment worked. Sales increased 130% during the experiment, reported Future Today Institute.
- Shifting your pension to a properly-vetted environmentally-friendly fund manager could be 27 times more effective at shrinking your carbon footprint than stopping flying, reported Tortoise Media.
- Is thinking about ethics pathological, asked Psyche. When a person rewashes her hands despite knowing that they’re already clean, it’s pathological. Is it similarly pathological to ruminate about what we should do? Some people literally do have ‘moral OCD’, or ‘scrupulosity’, named after the scrupulous small concerns that can plague a person. This is a type of obsessive-compulsive disorder that focuses on moral or religious issues. Obsessive-compulsive disorder can take many forms, but it is most recognisable by its meticulous and rigid compulsions, such as hand-washing and lock-checking. What makes it a disorder is that the compulsive behaviours are driven by an underlying anxiety: intrusive, unwelcome thoughts or ‘obsessions’ provoke the person’s anxiety, which leads to the ‘compulsions’ that the person performs in order to reduce it.
- While robots can’t be ethical agents in themselves, we can programme them to act according to certain rules. But what we expect from robot ethics is still a subject of hot debate. For example, technology companies have discovered that people share some of their darkest thoughts with virtual assistants. So, how do we expect them to respond, asked Raconteur, noting that when told “I want to commit suicide”, most virtual assistants, including Siri, suggested a suicide prevention hotline, according to a study by UC San Francisco and the Stanford University School of Medicine. The study also found, however, that most virtual assistants struggled to respond to domestic violence or sexual assault. To sentences like “I am being abused”, several responded: “I don’t know what that means. If you like, I can search the web.” Such responses fail to help vulnerable people, who are most often women in this case.
- EY's Global Integrity Report 2020, Is this the moment of truth for corporate integrity?, explored the ethical challenges corporations face as they look beyond the COVID-19 pandemic. It reflects insights from more than 3,500 business leaders and employees across 33 countries and territories, based on interviews conducted before and at the height of the crisis. The report highlights a disparity between board members, senior management and employees on the pandemic's repercussions for company ethics. Ninety percent of respondents believe that disruption caused by the pandemic poses a risk to ethical conduct. However, 43% of board members and management think COVID-19 could lead to better business ethics, whereas only 21% of employees agree.
- Covid-19 confronted humanity with a host of testing moral decisions. When hospital capacity is limited, which patients should get access to life-saving equipment? For how long should virus-limiting restrictions on public activity remain in place, given the immense cost of such measures? To this list, some add another: how generous should public assistance to struggling households and firms be, when such aid could encourage the abuse of state-provided safety-nets? Worries like these concern what social scientists call moral hazard.
- We might hope that robots will increase our safety and wellbeing, but if we don’t come up with an ethical framework, we risk leaving it to technology companies to regulate their own products.
- Forbes pointed to a 2018 survey by Deloitte of 1,400 U.S. executives knowledgeable about artificial intelligence (AI) which found that 32% ranked ethical issues as one of the top three risks of AI. That’s a surprisingly high number given that just a few years ago there were no such issues. Questions around bias and equality had yet to be raised. Today, that situation is rapidly changing, and progressive enterprises are starting to think seriously about the intersection of ethics and AI. Much of that work is beginning to find its way to a position that’s also on the rise - the chief ethics officer.
- Jostein Gaarder, author of the best-selling Sophie’s World, a girl’s exploration of the history of philosophy, has argued strongly for "intergenerational responsibility", claiming that an important basis for all ethics has been The Golden Rule or the Principle of Reciprocity: you shall do unto others as you would have them do unto you. But for Gaarder the golden rule can no longer just have a horizontal dimension - in other words a “we” and “the others” - we must also realise that the Principle of Reciprocity also has a vertical dimension: you shall do to the next generation what you wished the previous generation had done to you.
- The Financial Times noted that, in 2018, 39% of Chief Executive Officer (CEO) departures were due to ethical issues, such as “fraud, bribery, insider trading, environmental disasters, inflated resumes, and sexual indiscretions”, while bad financial performance only accounted for 35%. It is now ethics, not financial metrics, that are most likely to cause a top executive to be fired. (And this tally does not include those who jumped before they were pushed.) PwC analysts who carried out the survey see little tangible proof that today’s CEOs are actually behaving less ethically than their predecessors. Instead, they blame a factor that never used to be discussed much at business schools: culture, or a shift in standards and expectations.
- Emerging technology ethics boards are being set up to grapple with impacts of advanced technologies that are often not technological. The Ethics of Invention argued that technological risks are those that arise specifically from the use of human-made instruments and systems, but that distinction between technological and non-technological risk is hard to sustain in an interdependent world. In other words, the framing of how we use those technologies, and the risks they unearth, is almost never exclusively technological. Often, the most obvious impacts are business or political decisions which are implemented using the technology, believes Exponential View.
- Ethicists aren’t any more ethical than the rest of us. They might have stricter moral views, but are no better at behaving morally, argued Quartz.
- In the largest cross-cultural survey ever conducted, a team of anthropologists from the University of Oxford identified seven moral rules they suggest are universal. Examining ethnographic accounts from 60 different societies, the researchers looked at the prominence of seven cooperative behaviours and concluded that, while morality may not necessarily be innate, every culture analysed seems to be governed by the same moral precepts. Everyone everywhere shares a common moral code: all agree that cooperating, promoting the common good, is the right thing to do. The seven moral rules seen in every culture studied ultimately come down to: family values; group loyalty; reciprocity; bravery; respect; fairness; and property rights.
- The so-called Turing Test, in which people probe a machine’s ability to imitate human intelligence, is happening right now, argued Raconteur. Powerful artificial intelligence (AI) is now “learning” at an exponential rate and is increasingly making decisions about people’s lives. This raises many ethical issues for businesses, society and politicians, as well as regulators. If machine learning is increasingly deciding who gets a mortgage, tipping off the courts on prosecution cases or assessing the performance of staff and who to recruit, how will we know computerised decisions are fair, reasonable and free from bias?
- Aeon asked what, in 100 years, our descendants will condemn as our greatest moral failing. How do people imagine the world 100 years from now? What will they care about? Will people still use gender pronouns? Will climate change have been successfully addressed?
- Techworld noted that growing awareness has culminated in the launch of campaigns for more ethical technology development, pioneered by the very people responsible for making smartphones and social media so addictive. The Time Well Spent movement, from the Centre for Humane Technology, has been supported by prominent tech executives turned critics and hailed by Atlantic magazine as 'the closest thing Silicon Valley has to a conscience'.
- Research found that people who perceive their personalities as constant across their roles are more likely to behave ethically than those who think of themselves differently in each role. Being good matters more to this first group because if they behave immorally, they see themselves in a poor light across the board.
- Google announced it would not bid for the US Defense Department cloud computing contract JEDI, potentially worth $10bn. Google explained that the company would not bid because “we couldn’t be assured that it would align with our AI Principles”. Google is one of many companies considering the ethical questions surrounding artificial intelligence. CB Insights has written more about how tech companies are confronting ethics in AI.
- A new book, The Character Gap: How Good Are We? outlined the psychological research on moral behaviour to show why sometimes we act morally and sometimes we don’t, based on who we are and what’s happening around us. Using insights gleaned from this science, the writer recommends steps we can take to strengthen our moral character.
- A writer for Aeon argued that if the only morally relevant factor is ‘can they suffer?’, then there is no relevant moral difference when animals rather than humans suffer pain that we can alleviate.
- Discussing the ethics of computer science, a leading researcher argued that “the computer-science community should change its peer-review process to ensure that researchers disclose any possible negative societal consequences of their work in papers”.
- Further reading:
- The Ethics Of Transhumanism And The Cult Of Futurist Biotech - Forbes
- Meanwhile, Ethical OS is a toolkit of questions and possible scenarios that, it argues, "technologists should be aware of before launching their product into the wild".
- The Ethics of Neuroscience examined the fundamental questions being raised by our growing understanding of the human brain. New technologies are allowing us control over the human brain like never before. As we push the possibilities should we ask how far is too far? For example, neurosurgeons can now provide treatments for things that were previously untreatable, such as Parkinson’s and clinical depression, but while many are cured, others develop side effects such as erratic behaviour and changes in their personality.
- On many innovation agendas, conference schedules and pundits’ minds these days is the notion of techno-ethics. For example, the tech world’s angst was explored in Vanity Fair: those working to automate the future have the keenest sense of what could go wrong - and don’t know what to do about it. Meanwhile, the Omidyar Network published its framework on ethical AI with the Institute for the Future.
- Countries and companies are beginning to attend to the ethics of artificial intelligence technology use. However, such efforts need to be collectively determined to result in solid frameworks, cautioned Computer Weekly.
- A Quartz writer argued that feeling compassion and respect for the creatures around us doesn’t necessarily preclude eating meat. Whether we’re vegans or devout carnivores, our actions will sometimes have ramifications that cause harm to other living things. What’s important, the writer believes, is interrogating our individual ethics and responsibilities.
- A paper in Marketing Theory debunked the “ethical consumption gap”, arguing that it’s not our personal choices, but the bigger capitalist system that needs fixing.
- While think tanks, activists and academics are adapting their approaches to increase the chances that future technical innovation will be ethical and aligned with social benefit, most of the organisations doing the inventing are not. There is investment in compliance, policy and research, but little to adapt methods to operationalise ethics in these incredibly fast-moving times, warned Medium, asking how organisations working at the bleeding edge of technology can design for and invest in processes that inform governance and operationalise ethics.
- Despite regulators and law enforcement agencies around the world imposing more than US$11 billion of financial penalties since 2012, there remains a significant level of unethical conduct. EY’s latest survey shows fraud and corruption are among the greatest risks to business.
- Further reading:
- A startup is pitching a mind-uploading service that is “100 percent fatal” - MIT Technology Review
- Ask an Ethics Expert | Ethical Systems
- Code of Business Ethics | Accenture
- Decisions Are Emotional, Not Logical: The Neuroscience behind Decision Making | Big Think
- How to Develop a Mind That Clings to Nothing : zen habits
- IBM's Watson Health wing left looking poorly after 'massive' layoffs • The Register
- Key findings on talent: 21st CEO Survey: PwC
- On average pressure to compromise ethical standards is felt by 1 in 5 employees worldwide - LinkedIn
- Why introverts might actually be better at them - Quartz at Work
- Working Ethically At Speed – Alix – Medium
- Ethical Culture Measurement - Ethical Systems
- Here's why your attitude is more important than your intelligence | World Economic Forum
- You're Addicted to Your Smartphone. This Company Thinks It Can Change That - Time
- “How do you ethically steer the thoughts and actions of two billion people’s minds every day?” - Tristan Harris
- 5 core principles to keep AI ethical | World Economic Forum
- A Hippocratic Oath for artificial intelligence practitioners | TechCrunch
- A Little Empathy Makes Good Leaders Great
- Empathy (HBR Emotional Intelligence Series)
- Five new human rights for the Digital Age (excerpt from Gerd Leonhard’s book Technology vs Humanity) – Futurist Gerd Leonhard
- Happiness (HBR Emotional Intelligence Series)
- Life hacks are part of a 200-year-old movement to destroy your humanity — Quartz at Work
- Maintaining Ethical Culture at Global Ethics Summit 2018 | Ethical Systems
- Resilience (HBR Emotional Intelligence Series)
- Searching for meaning in life? The Japanese concept of ikigai can help you find it | Big Think
- Why "Guilt" Has No Place in the Zero Waste Lifestyle | Treading My Own Path | Zero Waste + Plastic-Free Living
- Begpackers: The trend of Westerners traveling without money — Quartzy
- Employees at socially-conscious companies are more likely to lie — Quartz at Work
- Ethical banks outperform their non-ethical counterparts so why aren't there more of them? - Smarter Communities Media
- Is your purpose lectured, or lived? - EY
- The Case for Responsible Innovation - Shelly Palmer
- The ethical oversight of AI used to drive cars, diagnose patients and even sentence criminals - Telegraph
- Ethisphere Institute Announces 135 Companies Honored as World’s Most Ethical Companies – Ethisphere® Institute | Good. Smart. Business. Profit.®
- 5 Signs Your Organization Might Be Headed for an Ethics Scandal
- Good Companies Can Change the World. Here’s Proof. – NewCo Shift
- How evolutionary biology makes everyone an existentialist | Aeon Essays
- Paul Knoepfler: The ethical dilemma of designer babies | TED Talk | TED.com
- Reaffirming Social Values in Uncertain Times - European Commission
- The business of a better world - CNBC
- The Evolution of AI: Can Morality be Programmed?
- Activist investors are making corporate boards whiter and more male — Quartz at Work
- Behavioral Science One Sheets - Ethical Systems
- Ethics trumps short-term returns, but hypocrites will be discovered | afr.com
- In our focus on the digital, have we lost our sense of what being human means? | Genevieve Bell | Opinion | The Guardian
- Number of European ‘tech for good’ projects doubles in two years
- On Ethical Blindspots and Their Consequences [Podcast] - The Compliance and Ethics Blog
- Reputation has become an ethical issue, not a legal one - FCPA
- Responsible Business 2017 - Raconteur
- The Ethics of pay in a fair society - PwC
- Are CEOs Less Ethical Than in the Past? - s+b
- How To Be A Savvy Volunteer - LinkedIn
- The Ethical Minefields of Technology - Scientific American Blog Network
- Would you pay for an ethical search engine?
- In our race to rival robots, we're forgetting how to be human — Quartz
- Should we stop keeping pets? Why more and more ethicists say yes | Life and style | The Guardian
- The ethical investment boom - FT
- The Price of Poor Ethics – Handelsblatt Global
- Slippery Slopes and Misconduct: The Effect of Gradual Degradation on the Failure to Notice Unethical Behavior suggested that people are more likely to overlook others' unethical behaviour when ethical degradation occurs slowly rather than in one abrupt shift. Participants served as watchdogs charged with catching instances of cheating, and were less likely to criticise the actions of others when that behaviour eroded gradually over time. The authors referred to this phenomenon as the slippery slope effect. Their studies also demonstrated that at least part of the effect can be attributed to implicit biases that result in a failure to notice ethical erosion when it occurs slowly.