ethics Archives - Grit Daily News

Crypto Ethics: The Math of Socially Responsible Investing in Blockchain & the Metaverse (June 26, 2022)

Disruptive technology always comes with ethical considerations, especially in the tech industry: the internet, artificial intelligence, social media, peer-to-peer platforms, streaming services, and now blockchain technology have all raised them. With blockchain technology playing an increasingly important role in today’s world, more investors are concerned with socially responsible investing.

Concerns about the ethical implications of blockchain technology have existed since its early days. As the technology gained relevance, however, it became a major target of both praise and criticism. Probably the sharpest criticism blockchain technology has faced came when the New York Times published a piece on Bitcoin’s environmental impact.

The piece, titled “In Coinbase’s Rise, a Reminder: Cryptocurrencies Use Lots of Energy,” brought further attention to existing concerns about proof-of-work’s energy consumption. Many articles and columns followed over the next few days, with companies like Square and Citi weighing in. While Bitcoin’s energy use is no longer in the spotlight, the debate remains relevant.

More recently, non-fungible tokens (NFTs) rose to prominence as celebrities and brands around the world began using and advocating for them. During the NFT craze, thousands of people joined the discussion about how ethical NFTs really were. While supporters defended their potential use cases and their role in democratizing art, detractors pointed to the financial fallout of speculation around them and to the perceived hypocrisy of their promoters.

Debate on the ethics of new technologies is nothing new. The International Journal of Ethics, published by The University of Chicago Press, was already covering the topic back in 1923. In an article titled “Some Ethical Consequences of the Industrial Revolution,” Austin Freeman wrote of the industrial revolution:

“This ethical atrophy represents the subsidence to a lower level of essential civilization. For civilization, as we have agreed, is based upon the recognition by man of his duty towards his neighbour; of which none can be more obvious than that of honesty and fair dealing.”

Today, most of us think of the industrial revolution not as a negative but quite the opposite. In the same way, most criticism of NFTs, blockchain, and crypto is aimed at their current state rather than at the technology itself. When it comes to investing in a socially responsible manner, the question is not whether to invest in crypto, but how.

Experts discussed this topic during the “The Math of Socially Responsible Investing in Blockchain & the Metaverse” panel, part of Grit Daily House at Consensus 2022. Leah Callon-Butler, Director at Emfarsis; Evin Cheikosman, Policy Analyst at the World Economic Forum; and Nisa Amoils, Managing Partner at A100x Ventures, took to the stage to share their insights, opinions, and experience with attendees.

Moderated by Linqto’s Chief Strategy Officer Karim Nurani, panelists discussed topics such as environmental concerns around blockchain, the regulation of fintech, and the role of women in developing countries. If you want to know what these experts have to say, you can watch the entire panel in the video below. You can also find our other panels on Grit Daily’s official YouTube Channel!

Tech Startups Have an Ethical and Reputational Edge Over Big Tech (September 21, 2021)

Earlier this year the Head of Google Research, Jeff Dean, conceded that his employer had taken a “reputational hit” after it fired Timnit Gebru and Margaret Mitchell, the former co-leaders of Google’s Ethical AI team. The backlash continued as more details of the story were revealed. WIRED magazine provided an in-depth look not only at the firings but also at a surrounding culture rife with allegations of racism, sexism, and territorial cliques.

Google is not the only tech company suffering a loss of trust among its employees and consumers. Facebook and Amazon regularly face similar crises. All three are routinely criticized for how they use the data they collect and whom they share it with, for creating “filter bubbles” without their users knowing it, and for producing discriminatory AI algorithms. All of this happens against the backdrop of younger generations putting their money where their values are while the CEOs of these companies testify before the U.S. Congress. Indeed, on June 15, 2021, in a rare moment of bipartisanship, the Senate confirmed Lina Khan in a 69-28 vote to lead the Federal Trade Commission. Khan is a leading advocate for stronger enforcement of antitrust and consumer protection laws against big tech.

Why can’t big tech solve their ethical problems?

Surely it would be better for them not to play public relations defense every day while bleeding consumer trust and fending off regulatory investigations. Why hasn’t Facebook’s oversight board saved the company from embarrassments instead of creating them? Why was Google’s AI ethics board dissolved less than a week after its formation was announced? How did Amazon not know that having its drivers pee in bottles for lack of breaks is both ethically and reputationally (not to mention aesthetically) odious?

Two reasons why these companies find it so difficult to be better

The first is an issue of sheer size. Turning around a large ship is difficult, even when the desire is there. Think about how much time and resources have to go into righting the ship: creating a culture and infrastructure where these issues are taken seriously (and so built into product development, deployment, quality assurance, etc.), assigning ethics-related responsibilities to existing and newly created roles, ensuring that financial compensation packages are aligned with the ethical goals of the company, and so on. It’s a big lift.

The second reason is that their respective business models incentivize, if not require, ethical breaches. Facebook, for instance, is driven by its ad revenue, which requires it to collect massive troves of data about its users, resulting in violations of privacy. It also needs to keep people on the platform for as long as possible, leading to the kinds of manipulative technologies detailed in the recent documentary “The Social Dilemma.”

Tech startups can and should punch harder than their behemoth competitors

Startups are small ships. So long as their founders and senior leaders take issues like data privacy and AI ethics seriously, they can transmit that to the team as a whole and build it into their products, which number in the single digits. It’s also easy for them to build their ethical credentials into their marketing campaigns and their sales pitches to their potential (enterprise) clients who need to work with companies they can trust not to mar their own reputations.

Their business models are also far more flexible than those of big tech. For instance, rather than adopt an ad model and all the ethical troubles it entails, a subscription model is a straight cash-for-service transaction and doesn’t require exploiting user data. In fact, it even opens the possibility of financially compensating users for their data, which stands to benefit users and businesses alike. Facebook can’t do that without taking on board a tremendous amount of risk.

Startups are beginning to take advantage of this ethics-first approach

Google has seen competitors touting their ethical credentials with respect to privacy, most notably DuckDuckGo and Neeva. Another example is the recently announced Bizconnect, a global B2B search engine. Google’s practice of having companies bid for sponsored keywords means that massive corporations routinely purchase the highly coveted top spots in search results, pushing smaller businesses down the page. On Google’s model, the rich get richer on an unfair playing field. On Bizconnect’s no-bid model, everyone pays the same and ranking among sponsored keywords rotates in a carousel fashion, giving every company, large and small alike, an equal opportunity to be first, second, or third in the results. (Full disclosure: I serve on the advisory board of Bizconnect.)
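
To make the contrast concrete, here is a minimal sketch of how a no-bid, rotating sponsorship ranking could work. The function, sponsor names, and slot count are illustrative assumptions, not Bizconnect’s actual implementation.

```python
def carousel_ranking(sponsors, impression_index, slots=3):
    """Return the sponsored results for one impression, rotating the
    starting position so every sponsor takes turns in the top slots."""
    if not sponsors:
        return []
    start = impression_index % len(sponsors)
    rotated = [sponsors[(start + i) % len(sponsors)] for i in range(len(sponsors))]
    return rotated[:slots]

# Three successive impressions of the same keyword: each sponsor gets a turn on top.
sponsors = ["Acme Corp", "Corner Bakery", "Midsize LLC"]
for i in range(3):
    print(i, carousel_ranking(sponsors, i))
```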

Startups should think about how they can turn the bad news for big tech into good news for themselves, their users and customers, and society as a whole. They should think about how to get their ethical house in order early on, and in a scalable way.

Start by not collecting as much user data as possible

More specifically, they should collect what is needed and not more, and be transparent with users about why their data is being collected.
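
As a rough illustration of what “collect what is needed and not more” can look like in practice, here is a hypothetical signup handler that whitelists only the fields the product requires and documents why each one is collected. The field names are assumptions made for the sake of the example.

```python
from dataclasses import dataclass

@dataclass
class SignupRecord:
    email: str          # needed to deliver the service and recover the account
    display_name: str   # shown to the people the user interacts with

def collect_signup(form: dict) -> SignupRecord:
    # Keep only the fields with a stated purpose; everything else is ignored.
    return SignupRecord(email=form["email"], display_name=form["display_name"])

record = collect_signup({"email": "a@example.com", "display_name": "Ada",
                         "birthday": "1990-01-01"})  # birthday is never stored
print(record)
```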

Sometimes ethical problems only occur at scale. YouTube serving one person a video containing disinformation about the illegitimacy of an election is not so bad. Serving it to hundreds of millions is a big ethical problem. Startups should think about what their brand’s ethical characteristics look like if they’re wildly successful; that will improve not only how they think about their products but also how they think about their business model.

Most importantly, startups should stop thinking of their users as “users.” Instead, they should think of them as people with whom they have a relationship. They should ask themselves, “How can we operate in a way that justifies the people we serve thinking of us as trustworthy?” That is not the same as asking, “How can we cause our users to trust us?” Part of this should involve talking to people outside their immediate circles about their business model and practices. It’s particularly important that those conversations occur with people who do not stand to benefit if the startup is successful. Financial interests can blind even the best of us, and before you know it, like Google, you quietly drop your commitment to “don’t be evil.”

Where is the accountability for AI ethics gatekeepers? (September 15, 2020)

Elite institutions, the self-appointed arbiters of ethics, are guilty of racism and unethical behavior but face zero accountability.

In July 2020, MIT took a frequently cited and widely used dataset offline when two researchers found that the ‘80 Million Tiny Images’ dataset used racist, misogynistic terms to describe images of Black and Asian people. 

According to The Register, Vinay Prabhu, a data scientist of Indian origin working at a startup in California, and Abeba Birhane, an Ethiopian PhD candidate at University College Dublin, discovered that thousands of images in the MIT database were “labeled with racist slurs for Black and Asian people, and derogatory terms used to describe women.” This problematic dataset was created back in 2008, and if left unchecked, it would have continued to spawn biased algorithms and introduce prejudice into AI models that used it as a training dataset.

This incident also highlights a pervasive tendency in this space to put the onus of solving ethical problems created by questionable technologies back on the marginalized groups negatively impacted by them. IBM’s recent decision to exit the facial recognition industry, followed by similar measures from other tech giants, was in no small part due to the foundational work of Timnit Gebru, Joy Buolamwini, and other Black women scholars. These are just a few of the many instances in which Black women and people of color have led the way in holding the techno-elites accountable for their ethical missteps.

Last year, Gizmodo reported that ImageNet also removed 600,000 photos from its system after an art project called ImageNet Roulette demonstrated systemic bias in the dataset. ImageNet is the brainchild of Dr. Fei-Fei Li at Stanford University and the work product of ghost workers on Mechanical Turk, Amazon’s infamous on-demand micro-task platform. In their book “Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass,” authors Mary L. Gray and Siddharth Suri describe a global underclass of invisible workers who make AI seem “smart” while earning less than the legal minimum wage and who can be fired at will.

As a society, we too often use elite status as a substitute for ethical practice. In a society that is unethical, success and the corresponding attainment of status can hardly be assumed to correlate with anything amounting to ethical behavior. MIT is the latest in a growing list of elite universities that have positioned themselves as experts and arbiters of ethical AI while glossing over their own ethical lapses without ever being held accountable.

Whose Ethics are These? 

Given the long history of prejudice within elite institutions, and the degree to which they have continuously served to uphold systemic oppression, it’s hardly surprising that they have been implicated in or are at the center of a wave of ethical and racist scandals. 

In March 2019, Stanford launched the Institute for Human-Centered AI with an advisory council glittering with Silicon Valley’s brightest names, a noble objective “to learn, build, invent and scale with purpose, intention and a human-centered approach,” and an ambitious fundraising goal of over $1 billion.

The new institute kicked off with glowing media and industry reviews, until someone noticed a glaring omission. Chad Loder pointed out that the 121 faculty members listed were overwhelmingly white and male, and not one was Black.

Rather than acknowledging the existence of algorithmic racism as a consequence of anti-Blackness at the elite universities that receive much of the funding and investment for computer science education and innovation, or the racism at tech companies that focus their college recruitment at these schools, we act as though these technological outcomes are somehow separate from the environments in which technology is built.

Stanford University is, by its own admission, a $6.8 billion enterprise with a $27.7 billion endowment fund, 79 percent of which is restricted by donors for specific purposes. After being at the center of the college admissions bribery scandal last year, it was recently in the hot seat again because of its callous response to the global pandemic, which has left many alumni disappointed.

MIT and Stanford are not alone in their inability to confront their structural racism and classism. Another elite university that has also been the recipient of generous donations from ethically problematic sources is the venerated University of Oxford. 

Back in 2018, U.S. billionaire Stephen Schwarzman, founder of the Blackstone finance group, endowed Oxford with $188M (the equivalent of £150M) to establish an AI ethics institute. The newly minted institute sits within the Humanities Center with the intent to “bring together academics from across the university to study the ethical implications of AI.” Given Blackstone Group’s well-documented ethical misdeeds, this funding source was of dubious provenance at best.

Schwarzman also donated $350M to MIT for AI research, but the decision to name a new computing center at the school after him sparked an outcry from faculty and students, mainly because of his role as a former advisor to and vocal supporter of President Donald Trump, who has been criticized for his overtures to white supremacists and embrace of racist policies.

Endowments are an insidious way for wealthy benefactors to exert influence on universities and guide their research, including policy proposals, and it is not realistic to expect donors to fund academic initiatives to reform a system that directly or indirectly benefits them.

This wasn’t the first high-profile donor scandal for MIT either. It had also accepted funding from the late Jeffrey Epstein, the notorious sex offender who was arrested on federal sex trafficking charges in 2019. The MIT-Epstein revelations led to public disavowals and resignations by leading researchers like Ethan Zuckerman, who stated publicly on his blog, “the work my group does focuses on social justice and on the inclusion of marginalized individuals and points of view. It’s hard to do that work with a straight face in a place that violated its own values so clearly in working with Epstein and in disguising that relationship.”

Evgeny Morozov, a visiting scholar at Stanford University, in a scathing indictment called it “the prostitution of intellectual activity” and demanded that MIT shut down the Media Lab, disband Ted Talks, and refuse tech billionaires’ money. He went on to say, “This, however, is not only a story of individuals gone rogue. The ugly collective picture of the techno-elites that emerges from the Epstein scandal reveals them as a bunch of morally bankrupt opportunists.” 

We have a reasonable expectation that elite schools will behave ethically and not use their enormous privilege to whitewash their own sins and those of their wealthy donors. It is also not entirely outrageous to ask them to use their enormous endowments during times of unprecedented crisis to support marginalized groups, especially those who have been historically left out of whitewashed elite circles, rather than some billionaire’s pet project.

It’s not enough to stop looking to institutions that thrive and profit off deeply unequal, fundamentally racist systems to act as experts in ethical AI; we must also move beyond excusing unethical behavior simply because it is linked to a wealthy, successful institution.

By shifting power to these institutions and away from marginalized groups, we are implicitly condoning and fueling the same unethical behaviors that we supposedly oppose. Unless we fully confront and address racial prejudice within the institutions responsible for much of the research and development of AI and our own role in enabling it, our quest for ethical and responsible AI will continue to fall short.

Co-author:

Ian Moura is a researcher with an academic background in cognitive psychology and Human-Computer Interactions (HCI). His research interests include autism, disability, social policy, and algorithmic bias.

Is it Ethical To Try a DNA Testing Kit If Your Family Is Against It? (April 27, 2020)

A couple of years ago, I bought a DNA testing kit to find out more about myself and where I come from. It told me pretty much exactly what I thought it would. My Nona was not exactly telling the truth when she said we had Native American ancestry. I’m mostly of European descent, with a touch of Middle Eastern from my paternal grandma. It was a fun little experiment, and it made me feel connected with my own history and who I am.

My brother, however, was not quite as pleased with my DNA testing kit adventure. My little brother is an intensely private person, bordering almost on paranoia. He has no social media and gave the dinner table an elaborate lecture when he came home from college to find that our father put a Google Home in the kitchen. When he found out I had voluntarily given my DNA, approximately 50% of which he shares, to a biotechnology company he was less than thrilled.

Is it Crazy to Be Concerned?

His concerns in this case, however, are not entirely unfounded. 23andMe has faced controversy about how it handles personal data since its inception. People have a right to be concerned. We live in a world where personal privacy is a constant hot button topic and it seems like every new technology leaves us potentially at risk.

At-home DNA testing kits came to the forefront of ethical discussions when one of these online genealogy services led to the capture of the Golden State Killer, Joseph James DeAngelo. Officials took DNA from the crime scenes and compared it to a relative of the suspect’s DNA that they obtained from one of these DNA testing companies. 23andMe and other major companies publicly denied that they had any involvement in the capture of the Golden State Killer, but it raised questions and concerns about what your DNA can really be used for once it’s handed over to a company.

Now, it’s easy to say as long as you don’t murder anyone, it’ll be fine. Don’t commit crimes and leave your DNA at crime scenes and there’s nothing to worry about. However, it’s not that simple.

Data and DNA Testing

First of all, DNA can be misleading. Just because your DNA is found at a crime scene doesn’t mean you committed a crime, and that could spell trouble for anyone who is falsely accused. According to the Innocence Project, an estimated 20,000 people currently incarcerated were falsely convicted.

However, that’s not really the major concern here. 20,000 is not that big of a number when you consider the entire US population. The much larger concern should be how these companies use the data extrapolated from these tests.

A DNA testing kit doesn’t just tell you about your ancestry. Many of them often test for genetic indicators related to various medical conditions. It’s really an issue of privacy. How many companies do you want to have access to your personal genetic information? How will those companies use that information to try to sell you something? Could it potentially affect insurance?

This data may very well just be used for medical research, which is what 23andMe claims. However, if you read the fine print, 23andMe reserves the right to use your data to potentially try to sell you products or services related to your health. And the company has confirmed in the past that they will share data with third parties. Who knows what happens to your data from there.

Are You Putting Your Family At Risk?

So how much of a concern should this be for family members? The short answer is that I wouldn’t worry about it too much; the chances of your DNA test affecting your family are pretty slim. The long answer is that it depends on the family. Could your use of a home DNA kit potentially lead to a relative’s arrest? Do they share genetic conditions with you? Are they especially worried about privacy? If the answer to all three questions is “no,” then it’s all good. But if any of them is a yes, proceed with caution.

If I had to do it all over again knowing what I know now, I probably wouldn’t have taken the test. I do not suspect anyone in my family of leaving their DNA at crime scenes, but out of respect for our collective privacy, I just might refrain.

This organization wants regulations to make technology more human (December 17, 2019)

One San Francisco-based organization is looking to promote a new measure of success for big technology companies.

Instead of looking at how much time people spend on their platforms and trying to get them to spend more, Tristan Harris wants companies to make decisions based on the real-world benefit they create for the user, especially interpersonal connection.

The Center for Humane Technology is a nonprofit started by Harris, a former Google ethicist, together with Aza Raskin and Randima Fernando.

In a TED talk, Harris pointed to the company CouchSurfing as an example. He said the company measures its success in the number of positive hours of interaction people say they had, and then subtracts the time those people spent on the website finding one another.

“Can you imagine how inspiring it would be to come to work every day and measure your success in the actual net new contribution of hours in people’s lives that are positive, that would have never existed if you didn’t do what you were about to do at work today,” Harris said in his TED talk.
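
The metric Harris describes can be expressed in a few lines. This is only a sketch of the idea, with invented numbers, not CouchSurfing’s actual accounting.

```python
def net_positive_hours(reported_positive_hours, hours_spent_on_site):
    """Hours of positive real-world interaction people report, minus the
    time they had to spend on the site to arrange that interaction."""
    return reported_positive_hours - hours_spent_on_site

# Example: two people report 5 good hours together and spent half an hour
# on the site finding each other, so the net contribution is 4.5 hours.
print(net_positive_hours(5.0, 0.5))
```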

Harris’ goal is for companies to help people spend their time well.

In his talk, Harris said his interest lies in smarter technology that benefits people by giving them choices that lead to better connections with others, not just more connections.

He also doesn’t believe in fully turning away from technology, but instead he wants to change it.  

The building blocks

Harris’s ideas were first aired in 2013, but his efforts have become more concentrated in the past two years, following the founding of the organization.

The Center for Humane Technology pursues its goals by combatting what it calls “human downgrading” in several ways. It appeals to the public, but it also formally appeals to legislators.

In June, Harris spoke to a subcommittee of the Committee on Commerce, Science, and Transportation. He implored its members to begin putting regulations in place to rein in the tactics that companies use, without limitation, to keep people engaged with their platforms.

As a design ethicist at Google in 2013, Harris first put his ideas on paper — or rather slides. He created a slide presentation for Google employees that outlined what he saw as the ethical problems of technology today.

He said in his statement to the senators that he tried to see whether the problem could be fixed from the inside while he was still at Google, but concluded that it could not.

His TED talks and his 2016 interview on 60 Minutes sparked interest in what he had to say from the public, not just his colleagues.

These efforts sparked the Time Well Spent movement.

In 2018, he co-founded the Center for Humane Technology, which is a collection of “leaders in technology, humanity, mindfulness, philosophy, and education,” according to their website.

That same year, both Apple and Google launched features that show users how much time they spend on their phones and what they spend it on.

This year, in addition to his appeal to legislators he has also written an opinion article in the New York Times entitled, “Our brains are no match for technology.”

The current system

The organization highlights six key areas of human-device interaction: digital addiction, mental health, the breakdown of truth, polarization (especially political), political manipulation, and superficiality.

The organization says that these are all side-effects of the current technology landscape and the interactions that technology has with us.

One of the ways Harris said companies keep our attention is notifications.

He said in a TED talk that companies “plan” our day with the interruptions that they introduce in our lives in the form of notifications and emails. Harris used the example of Facebook saying you were tagged in a photo.

“I’m not just going to click ‘see photo’; what I’m actually going to do is spend the next twenty minutes,” he said.

He said in this way Facebook is planning an interruption into your day.

“The worst part is that I know that this is what is going to happen, and even knowing that that’s what is going to happen doesn’t stop me from doing it again the next time,” he said.

Analyzing the Pros and Cons of Cashless Retail (November 16, 2019)

Cashless retail stores are often in the headlines these days. As the name would suggest, they allow people to go into a store and buy things without using cash. Typically, shoppers have their payment details stored in an app, and a card on file automatically gets charged for their items when they leave the store.
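
In code, the basic flow these stores describe, a card on file charged automatically as the shopper leaves, might look like the sketch below. The prices, token, and payment call are placeholders, not any retailer’s real API.

```python
PRICES = {"sandwich": 5.50, "coffee": 3.25}  # hypothetical catalog

def fake_charge(card_token, amount):
    """Stand-in for a real payment-processor call."""
    return {"card": card_token, "charged": amount, "status": "approved"}

def charge_on_exit(card_token, basket, charge_fn=fake_charge):
    # Items detected in the shopper's basket are totaled and billed to the
    # card stored in the app as they walk out; no cashier is involved.
    total = round(sum(PRICES[item] for item in basket), 2)
    return charge_fn(card_token, total)

print(charge_on_exit("tok_example_123", ["sandwich", "coffee"]))
```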

These high-tech shops are getting more popular, especially in cities. They allow people to buy what they need or want faster than if they waited in lines for cashiers to serve them.

It might be convenient for some shoppers to buy things without carrying cash, but critics say this trend poses ethical questions that must be taken seriously as the technology rolls out in more places.

The Cashless Economy Gives More Power to Fewer Companies

The move toward a cashless society naturally makes people more dependent on credit and debit cards. However, that shift is not necessarily a wholly good one. Only four companies are associated with more than half of the credit cards issued in the U.S. Retailers collectively must pay billions in processing fees to the respective card brands each year if they accept their payment methods.

Some big retailers like Amazon and Walmart have the leverage to negotiate with card providers and get them to lower their processing fees. Smaller businesses, on the other hand, aren’t influential enough to do the same. If the cashless retail trend continues to accelerate, those enterprises could become trapped in a system that keeps the large banks and card companies prosperous but harms retailers who are further down the system’s chain.
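
A back-of-the-envelope model shows why this asymmetry matters. The fee rates below are invented placeholders, not any card network’s actual pricing.

```python
def annual_processing_cost(card_sales, fee_rate, per_txn_fee, avg_ticket):
    """Approximate yearly card-processing cost: a percentage of sales
    plus a flat fee on every transaction."""
    transactions = card_sales / avg_ticket
    return card_sales * fee_rate + transactions * per_txn_fee

# Same $500,000 in card sales, but one merchant pays a standard rate and
# the other a negotiated one.
small_shop = annual_processing_cost(500_000, 0.029, 0.30, avg_ticket=25)
big_box = annual_processing_cost(500_000, 0.015, 0.10, avg_ticket=25)
print(f"standard rate: ${small_shop:,.0f}, negotiated rate: ${big_box:,.0f}")
```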

An improvement could come if card companies agreed to lower their fees across the board, but that is unlikely to happen as long as enough retailers are willing to tolerate the processing charges.

A Cashless Economy Puts Lower-Income Individuals at a Greater Disadvantage

People in low-income brackets are less likely than higher earners to have bank accounts or credit cards. Many may feel it isn’t worthwhile to even try to get such financial resources, and if they do attempt to, their financial circumstances may make them ineligible. Critics believe the push toward a cashless society could widen the inequalities faced by low-income individuals.

Some retailers refuse to serve people who have only cash, which means certain people can’t buy from certain stores. Such an obstacle could increase those individuals’ perception that they are “less than” compared to people who can buy things without cash.

The banking sector needs to address how to make their products and services more accessible to low-income families.

Some primarily cashless retail stores have started accepting cash too, albeit mainly due to regulatory pushback in their respective states. The ability to pay with cash when desired keeps those stores accessible to everyone who wants to use them.

Alternatively, cashless retailers should ensure that they accept prepaid debit cards. They could even sell the cards on-site as an additional convenience and income generator. These cards work like gift cards and can help bridge the divide for people who don’t have bank accounts.

Protests From Citizens, Action From Politicians

Because of the reasons mentioned, along with several others, many people are against cashless stores. The associated outcry has led to lawmakers banning cashless stores in places like New Jersey, San Francisco, and Philadelphia.

The protests extend internationally, too. Hundreds of concerned residents showed up to protest a cashless retail store in France. The location, operated by the Casino brand, opened for eight hours on a Sunday with no employees in sight. Critics are worried because they say people have enough chances to buy what they need without offering this new arrangement that doesn’t require human labor.

Experts who give their insights on the future of retail know that major companies still find it worthwhile to invest in both digital and physical stores. These cashless stores combine digital and physical elements to serve people more efficiently, but not everyone is on board with reducing or eliminating the human workforce. They worry about whether these innovative stores could negatively affect employment rates.

PropTech professionals, city planners, and similar individuals should be cautious about opening a store that refuses cash. Doing so could prove very expensive and force the store’s operators to change their business model if cash-free stores become illegal in a city, state, or country.

Cashless Stores Eliminate the Hassle of Handling Money

Despite the ethical issues with cashless stores, people should not overlook the benefits. Cashless transactions are faster to process because they don’t require manual counting. When workers are tired at the end of the day or dealing with more than one thing simultaneously, they may make mistakes while verifying the amount of money in the till. As a result, the store’s records can become inaccurate.

Plus, cashless retail outlets don’t make employees face the risks they might encounter when handling money. For example, if stores do not keep cash at all or handle cash transactions only occasionally, robbers may not target them.

Some stores, like 24-hour convenience shops, are favorites for thieves. They often plan the robberies to happen at times when the stores are not crowded and might have just one worker taking care of all the responsibilities. If a would-be robber sees a sign that says a store doesn’t accept cash, they’ll likely move on. Cashless stores have lots of sensors and cameras to discourage people from stealing things, too.

Cashless Options Give Convenience to Customers

Buying something from a cashless store eliminates the possibility of credit card mishaps. For example, a person might use a card to pay for something and not put it firmly into a wallet after the transaction. Then, if they lose their card, it can take weeks to replace. If payment details exist in a cashless store’s app, people pay seamlessly without keeping track of physical methods.

Opportunities also exist for people to get rewarded for choosing to pay without cash. Oleg Moskalensky is the owner and CTO of Productive Computer Systems (PCS). The company developed several apps that allow restaurants to run cashless businesses. One of them, called My10Dishes, enables people to order food from mobile apps by using a browser bookmark or QR code without downloading or installing anything. The app digitally stores their payment details.

There’s also something called Frequent Dividends, available as a standalone solution or My10Dishes integration. If people join Frequent Dividends — FreD for short — they earn credits for rewards. Moskalensky explains, “My10Dishes will automatically give them appropriate credits. [A]nd if they earn a reward, [the app] discount[s] the total by the reward amount.”

Moskalensky says this loyalty system supports the cashless retail trend. He clarifies, “[The app] becomes part of the cashless experience. All customers do is order via My10Dishes, and they automatically get discounts over time.”
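
A simplified version of the loyalty flow Moskalensky describes, earning credits on each order and taking the reward amount off the total once a threshold is reached, could look like this. The credit rate, threshold, and reward value are made-up numbers for illustration.

```python
CREDIT_PER_DOLLAR = 1      # credits earned per dollar spent (illustrative)
REWARD_THRESHOLD = 100     # credits needed to unlock a reward
REWARD_VALUE = 5.00        # dollars discounted when a reward is redeemed

def apply_loyalty(order_total, credits):
    """Add credits for this order and, if a reward is unlocked,
    redeem it against the total."""
    credits += int(order_total * CREDIT_PER_DOLLAR)
    discount = 0.0
    if credits >= REWARD_THRESHOLD:
        credits -= REWARD_THRESHOLD
        discount = REWARD_VALUE
    return max(order_total - discount, 0.0), credits

total, credits = apply_loyalty(32.00, 80)  # 80 banked credits + 32 earned = 112
print(total, credits)                      # 27.0 12
```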

The Chance to Pay Without Cash Could Make People Buy More

If a person comes into a store and has only $20 in their wallet, they’ll likely be especially careful with what they select so that the total doesn’t exceed what they can spend. Conversely, cashless stores could encourage people to buy things they otherwise wouldn’t. Customers can indulge in impulse purchases or choose to add $5 to their total to donate to a charity campaign, for example.

Cashless Stores Could Modernize Cities

Cashless stores could make a city seem more tech-equipped. That characteristic of a location may attract tech investors at large and help the city’s economy.

A market forecast also indicated that the interactive kiosk sector should grow by five percent over the forecast period, and the firm behind the report pointed to cashless retail as one of the reasons for that growth. Cashless stores may well want to install kiosks for novelty items; Best Buy did something similar about a decade ago when it installed vending machines that dispensed electronic gadgets in airports.

More recently, a Philadelphia hotel made headlines by installing a vending machine that doles out mini wine bottles. Since alcohol is an age-restricted product, a human has to verify how old the person is first. Then, they give out a token for the consumer to put into the machine. The buyer gets a chilled wine bottle complete with a gold sipper to make it easier to drink.

Whether cities choose to use kiosks in cashless stores or not, the decision to let people engage in commerce without paper money could make analysts view those cities as pioneering and keeping pace with current needs.

Not a Perfect Solution, But Worth Investigating

Valid ethical concerns surround cashless stores, and it’s crucial to remain mindful of them as these kinds of retailers become more prominent. The ideal way forward is not to stifle the growth of cashless stores but to give people options for times when paying without cash is not feasible or desirable.

The article Analyzing the Pros and Cons of Cashless Retail first appeared on Inno & Tech Today.
