Guarantee your career with AZ-500 Questions and Answers and cram

killexams.com AZ-500 Exam Braindumps offers each of you that you have to take the Certified exam. We offer 100% free AZ-500 test questions to download and evaluate. Our Microsoft AZ-500 Exam will give you exam questions with valid answers that reflect the real exam. We at killexams.com are made game plans to draw in you to finish your AZ-500 test with good grades.

Exam Code: AZ-500 Practice exam 2023 by Killexams.com team
AZ-500 Microsoft Azure Security Technologies 2023

Manage identity and access (20-25%)

Configure Azure Active Directory for workloads

 create App Registration

 configure App Registration permission scopes

 manage App Registration permission consent

 configure Multi-Factor Authentication settings

 manage Azure AD directory groups

 manage Azure AD users

 install and configure Azure AD Connect

 configure authentication methods

 implement Conditional Access policies

 configure Azure AD identity protection

Configure Azure AD Privileged Identity Management

 monitor privileged access

 configure Access Reviews

 activate Privileged Identity Management

Configure Azure tenant security

 transfer Azure subscriptions between Azure AD tenants

 manage API access to Azure subscriptions and resources



Implement platform protection (35-40%)

Implement network security

 configure virtual network connectivity

 configure Network Security Groups (NSGs)

 create and configure Azure Firewall

 create and configure Azure Front Door service

 create and configure application security groups

 configure remote access management

 configure baseline

 configure resource firewall

Implement host security

 configure endpoint security within the VM

 configure VM security

 harden VMs in Azure

 configure system updates for VMs in Azure

 configure baseline

Configure container security

 configure network

 configure authentication

 configure container isolation

 configure AKS security

 configure container registry

 implement vulnerability management

Implement Azure Resource management security

 create Azure resource locks

 manage resource group security

 configure Azure policies

 configure custom RBAC roles

 configure subscription and resource permissions



Manage security operations (15-20%)

Configure security services

 configure Azure Monitor

 configure diagnostic logging and log retention

 configure vulnerability scanning

Configure security policies

 configure centralized policy management by using Azure Security Center

 configure Just in Time VM access by using Azure Security Center

Manage security alerts

 create and customize alerts

 review and respond to alerts and recommendations

 configure a playbook for a security event by using Azure Security Center

 investigate escalated security incidents



Secure data and applications (25-30%)
Configure security policies to manage data

 configure data classification

 configure data retention

 configure data sovereignty

Configure security for data infrastructure

 enable database authentication

 enable database auditing

 configure Azure SQL Database Advanced Threat Protection

 configure access control for storage accounts

 configure key management for storage accounts

 configure Azure AD authentication for Azure Storage

 configure Azure AD Domain Services authentication for Azure Files

 create and manage Shared Access Signatures (SAS)

 configure security for HDInsight

 configure security for Cosmos DB

 configure security for Azure Data Lake

Configure encryption for data at rest

 implement Azure SQL Database Always Encrypted

 implement database encryption

 implement Storage Service Encryption

 implement disk encryption

Configure application security

 configure SSL/TLS certs

 configure Azure services to protect web apps

 create an application security baseline

Configure and manage Key Vault

 manage access to Key Vault

 manage permissions to secrets, certificates, and keys

 configure RBAC usage in Azure Key Vault

 manage certificates

 manage secrets

 configure key rotation

Microsoft Azure Security Technologies 2023
Microsoft Technologies book
Killexams : Microsoft Technologies book - BingNews https://killexams.com/pass4sure/exam-detail/AZ-500 Search results Killexams : Microsoft Technologies book - BingNews https://killexams.com/pass4sure/exam-detail/AZ-500 https://killexams.com/exam_list/Microsoft Killexams : A glance into the genius Microsoft brain with these 8 Satya Nadella books recommendations Killexams : ERROR: The request could not be satisfied

Request blocked. We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner.
If you provide content to customers through CloudFront, you can find steps to troubleshoot and help prevent this error by reviewing the CloudFront documentation.


Generated by cloudfront (CloudFront)
Request ID: 3a3jfY0ortLSrI1ylcVvhjtYFJzPUQEQ5kYWafDnPcbgqAVJeGNfMg==
Tue, 22 Aug 2023 14:00:00 -0500 text/html https://www.lifestyleasia.com/hk/culture/microsoft-ceo-satya-nadella-books-recommendations-to-read/
Killexams : Book review: Tom Kemp’s ‘Containing Big Tech’ provides a guide for consumers and businesses alike

Tom Kemp’s new book about the dangers of the five Big Tech companies is several books in one volume. Normally, this would not be a great recommendation.

But stick with me here and see if you agree that he has written a very useful, effective and interesting book. Think of it more as a new toolbox that can do double duty for both information technology managers and end users.

Kemp was the founder and chief executive of Centrify, now Delinea, a venture capitalist and a tech policy adviser behind many state-level privacy bills. His book, “Containing Big Tech,” due out Tuesday, is nominally a detailed history on how Microsoft Corp., Google LLC, Facebook, Instagram and WhatsApp owner Meta Platforms Inc., Amazon.com Inc. and Apple Inc. have become the tech powerhouses and near-monopolists with their stranglehold on digital services — and at the same time threatening digital privacy.

The 264-page book is a reference work for consumers who are concerned about what private information is shared by these vendors, and how to take back control over their data. It is also an operating manual for business information technology managers and executives who are looking to comply with privacy regulations and also prevent their own sensitive data from leaking online. And it’s a legislative to-do list for how to fashion better data and privacy protection for our digital future.

Kemp focuses on eight areas of interest, one per chapter. For example, one describes some startling failures at reining in the data broker industry, and another goes into detail about how easily disinformation has prevailed and thrived in the past decade. The author mixes his own experience as a tech entrepreneur, investor and executive with very practical matters.

Each chapter has a section dealing with the issue, then the response of the various tech vendors, and finally a collection of various laws and proposals from both the EU and the U.S. in response. This last section is a sad tale about the lack of legislative forward motion in the U.S. and how the EU has forged ahead with its own laws in this area — only for them to be lightly enforced.

Speaking of legislation, I asked him what he thought about the lack of any progress in that department, especially at the U.S. federal level. “No one is going to do anything to modify Section 230 — all previous efforts have been roundly beaten,” he said. “Eventually, pressure is going to shift to EU, with its new laws that take effect in 2024.”

Those laws, he added, will require online businesses to monitor their platforms for objectionable speech. “These will also give end users the ability to flag content and make the tech vendors to be more transparent,” he said. “Tech platforms will then have to finally respond. I don’t see anything happening in the U.S., nor with any new federal privacy laws enacted.”

Kemp’s unique know-how and the combination of these different perspectives makes for a fascinating read. For example, to test Google’s claims that it has cleaned up its heavy-handed location monitoring, he did some role-playing and set up appointments at local abortion clinics, visited drug stores, and shopped online. His online activity and location data was monitored by Google about every six minutes.

“The real-time nature of this monitoring was impressive,” he said. “Google knows the ads that they served me, the pages I visited, my Android phone notifications and locations. And despite their promises, they were logging all these details about me.”

Even if you have been parsimonious about protecting your privacy, you probably don’t know that Meta’s tracking pixel is used by a third of the world’s most popular websites and is at the heart of numerous privacy lawsuits, especially in Europe. Or what are the exact sequence of steps to tamp down on what the five tech vendors allow consumers to make their activities more private.

Kemp doesn’t pull any punches: He lays blame at the keyboards of these Big Tech companies and our state and federal legislators. “Big Tech’s anticompetitive practices have also significantly contributed to them becoming these giants who act as gatekeepers to our digital economy,” he writes. They certainly have plenty of resources: “The five Big Tech firms have five of the seven largest cash balances of any S&P 500 company in 2022.”

He documents the missteps that the major tech vendors have taken, all in the service of their almighty algorithms and with the aim of increasing engagement, no matter the costs to society, or to its most at-risk members — namely children.

I asked him about the latest crop of studies that were paid for in part by Meta and appeared in various technical journals and the New York Times. “Meta was closely involved in shaping this research and in setting the agenda,” he said. “It wasn’t a neutral body – they framed the context and provided the data.”

Part of the problem, he added, is that the big tech platforms are “talking out of both sides of their mouths. They market their platforms specifically to influence people to buy products from their advertisers. But then their public policy staffs have another message that says they don’t really influence people when bad things happen to them. They certainly haven’t helped the situation via algorithmic amplification of using their services.”

I reminded him that many of the Big Tech trust and safety teams were among the first groups to be fired when the most recent downturn happened. That doesn’t help with things either.

Business managers should get a copy of this book now, especially if they want to stay abreast of the issues he mentions. Readers should also check out his website for post-publication updates, which could be helpful.

Even for those businesses that may have taken some of the privacy-enhancing steps he outlines in one of the book’s appendices, there is still plenty to learn and to explore in “Containing Big Tech.”

Image: Pixabay

Your vote of support is important to us and it helps us keep the content FREE.

One-click below supports our mission to provide free, deep and relevant content.  

Join our community on YouTube

Join the community that includes more than 15,000 #CubeAlumni experts, including Amazon.com CEO Andy Jassy, Dell Technologies founder and CEO Michael Dell, Intel CEO Pat Gelsinger and many more luminaries and experts.

“TheCUBE is an important partner to the industry. You guys really are a part of our events and we really appreciate you coming and I know people appreciate the content you create as well” – Andy Jassy

THANK YOU

Mon, 21 Aug 2023 07:34:00 -0500 en-US text/html https://siliconangle.com/2023/08/21/book-review-tom-kemps-containing-big-tech-provides-guide-consumers-businesses-alike/
Killexams : Microsoft ends Kinect production, but the technology will live on

What just happened? Microsoft's Kinect device is no longer being produced. You might think that this happened years ago, and you'd be right. The version being killed off in this instance is the Azure Kinect Developer Kit that was introduced in 2019.

It was back in 2017 when Microsoft announced it was halting all production of its flagship motion control device, the Kinect, ending a seven-year lifespan. A year later, Microsoft said it would also stop production of the Xbox Kinect Adapter, the USB accessory that allowed Kinect owners to connect the sensor to an Xbox One S, Xbox One X, or Windows PC.

But a new version of the Kinect arrived in 2019 in the form of the Azure Kinect Developer Kit, a $399 PC peripheral powered by Azure cloud compute and designed for enterprise use and developers who build AI algorithms.

Now, Microsoft has announced that it has decided to end production of the Azure Kinect Developer Kit. The device, which came with a 12-megapixel camera, a time-of-flight depth sensor, and seven microphones, isn't being completely consigned to the history books, though.

Microsoft said the technology will continue to be available through its partner ecosystem, so anyone who might have use for such as thing should be able to find similar offerings sold by third-party partners. The Windows maker recommends the Femto Bolt by Orbbec, which uses the same depth camera module, as an alternative to the Azure Kinect.

The remaining Azure Kinect Developer Kits are available to buy through the end of October or until supplies last. Existing users can continue using them "without disruption," and the Azure Kinect Developer Kit SDK (software development kit) will continue to be available for download.

After first being released for the Xbox 360 in November 2010, the Kinect went on to sell around 35 million units during its lifespan. Microsoft tried to reignite interest by bundling it with the Xbox One when it launched in 2013, but that increased the price of the console. The plan appeared to backfire and the bundles soon stopped.

Tue, 22 Aug 2023 03:55:00 -0500 Rob Thubron en-US text/html https://www.techspot.com/news/99876-microsoft-ends-production-kinect-but-technology-live.html
Killexams : An author says AI is ‘writing’ unauthorized books being sold under her name on Amazon No result found, try new keyword!An author is raising alarms this week after she found new books being sold on Amazon under her name — only she didn’t write them; they appear to have been generated by artificial intelligence. Thu, 10 Aug 2023 02:03:11 -0500 en-us text/html https://www.msn.com/ Killexams : Microsoft's Surface Book 3 passes through FCC

Expected in May, the imminent launch of Microsoft's Surface Book 3 is one step closer as it's now passed through the FCC. Spotted by Windows Latest, the filing once again shows something only called Portable Computing Device, just like when the Surface Go 2 was certified by the FCC.

Other than that, there's not much to tell here. This is just another milestone that demonstrates that Microsoft is close to announcing the product.

As for what the Surface Book 3 will actually be, there are still a lot of questions. We're expecting to see Intel's 10th-generation 'Comet Lake' CPUs, along with newer Nvidia GeForce GTX 16xx graphics. According to a few reports, there will be Quadro options as well. But other than the internals, not much is known about if or how the design will change.

The Surface Book has historically been criticized for its hinge gap and its poor weight balance. Both of these are due to its unique design, with a detachable display. Other than the GPU and one of the two batteries, all of the internals are stored in the screen so it can work when it's detached. Naturally, this results in a heavier tablet portion.

Third-generation Surface products are typically where the product matures though, and typically where kinks are worked out. We've seen this in the Surface Pro 3, Surface 3, and most recently, the Surface Laptop 3. Hopefully, the Surface Book 3 will follow suit.

Tue, 15 Aug 2023 05:00:00 -0500 en text/html https://www.neowin.net/news/microsofts-surface-book-3-passes-through-fcc/
Killexams : Books Microsoft CEO Satya Nadella recommends for greater mind No result found, try new keyword!On his birthday we look back at the book recommendations made by Microsoft CEO Satya Nadella for sharper minds and innovation ... Fri, 18 Aug 2023 14:51:00 -0500 en-US text/html https://www.outlookindia.com/international/books-microsoft-ceo-satya-nadella-recommends-for-greater-mind-news-311816 Killexams : Productivity in accounting: Is technology the problem?

Last month, I wrote about the fact that economists are having difficulty finding evidence that the technology advances of the past decade or so have produced any meaningful increase in productivity (see "The productivity problem in accounting"), and from there I pointed out the similar mystery of whether accounting firm staff became more or less productive when they started working remotely en masse due to the COVID pandemic.

That sparked a very thoughtful response from Jason Reis, a partner at Reis & Reis CPAs in Wisconsin, that I'm going to quote at some length, because he raises some very interesting points: "Our firm has both remote and flex employees, and it has worked out well," he wrote. "I have noticed in my 20-plus-year career as a partner that productivity has consistently continued to drop. We used to think it was the employees being less productive, but now we think differently about it."

"Our own impression is that as technology has evolved, the complexity of our regulatory environment has evolved with it, and it rapidly changes now more than ever," he continued. "We do more trainings and research now than we ever have with staff. That also includes the never-ending landscape of technology options, which at times can be distractions."

All businesses, we're told, are technology businesses these days, but what that often seems to mean is we're all spending a great deal of our time on technology, and not necessarily on the business. The amount of time we all spend on email is a perfect example: Email speeds up our work processes enormously — but how much of the time saved is given over to managing our overflowing inboxes?

Writ large, this question is valid for all technologies: How much time do we end up devoting to them, or to the other requirements they engender, rather than to the work they're meant to enable? (And that's without counting the time when we're rendered unproductive because they're out of order, or offline.)

This is not to suggest that technology is a net negative, or that it hasn't played a critical role in firms being able to push through an ever-increasing amount of work in the face of an ever-diminishing workforce. It has done all that; but, has the ability to do more — and more complex —things, simply led us to pile on more and more of them? In other words, is technology eating up its own productivity gains?

Economists are uncertain about this very issue, and I'm no economist, so I'm even more uncertain — and I'm curious what you think. How much more productive has technology made you? And what kind of return are you getting on all the time and energy and money you spend on managing all the technology in your firm? Let me know at daniel.hood@arizent.com.

Wed, 23 Aug 2023 08:48:00 -0500 en text/html https://www.accountingtoday.com/opinion/productivity-in-accounting-is-technology-the-problem
Killexams : The rise of the tech ethics congregation

Just before Christmas last year, a pastor preached a gospel of morals over money to several hundred members of his flock. Wearing a sport coat, angular glasses, and wired earbuds, he spoke animatedly into his laptop from his tiny glass office inside a co-working space, surrounded by six whiteboards filled with his feverish brainstorming.

Sharing a scriptural parable familiar to many in his online audience—a group assembled from across 48 countries, many in the Global South—he explained why his congregation was undergoing dramatic growth in an age when the life of the spirit often struggles to compete with cold, hard, capitalism.

“People have different sources of motivation [for getting involved in a community],” he sermonized. “It’s not only money. People actually have a deeper purpose in life.” 

Many of the thousands of people who’d been joining his community were taking the time and energy to do so “because they care about the human condition, and they care about the future of our democracy,” he argued. “That is not academic,” he continued. “That is not theoretical. That is talking about future generations, that’s talking about your happiness, that’s talking about how you see the world. This is big … a paradigm shift.”

The leader in question was not an ordained minister, nor even a religious man. His increasingly popular community is not—technically—a church, synagogue, or temple. And the scripture he referenced wasn’t from the Bible. It was Microsoft Encarta vs. Wikipedia—the story of how a movement of self-­motivated volunteers defeated an army of corporate-funded professionals in a crusade to provide information, back in the bygone days of 2009. “If you’re young,” said the preacher, named David Ryan Polgar, “you’ll need to google it.”

Polgar, 44, is the founder of All Tech Is Human, a nonprofit organization devoted to promoting ethics and responsibility in tech. Founded in 2018, ATIH is based in Manhattan but hosts a growing range of in-person programming—social mixers, mentoring opportunities, career fairs, and job-seeking resources—in several other cities across the US and beyond, reaching thousands. Such numbers would delight most churches. 

""
David Polgar, the founder of All Tech Is Human, on stage at a recent Responsible Tech Mixer event in New York City.

COURTESY OF ALL TECH IS HUMAN

Like other kinds of congregations, ATIH focuses on relationship-­building: the staff invests much of its time, for example, in activities like curating its “Responsible Tech Organization” list, which names over 500 companies in which community members can get involved, and growing its responsible-tech talent pool, a list of nearly 1,400 individuals interested in careers in the field. Such programs, ATIH says, bring together many excellent but often disconnected initiatives, all in line with the ATIH mission “to tackle wicked tech & society issues and co-create a tech future aligned with the public interest.”

The organization itself doesn’t often get explicitly political with op-eds or policy advocacy. Rather, All Tech Is Human’s underlying strategy is to quickly expand the “responsible-tech ecosystem.” In other words, its leaders believe there are large numbers of individuals in and around the technology world, often from marginalized backgrounds, who wish tech focused less on profits and more on being a force for ethics and justice. These people will be a powerful force, Polgar believes, if—as the counterculture icon Timothy Leary famously exhorted—they can “find the others.” If that sounds like reluctance to take sides on hot-button issues in tech policy, or to push for change directly, Polgar calls it an “agnostic” business model. And such a model has real strengths, including the ability to bring tech culture’s opposing tribes together under one big tent. 

But as we’ll see, attempts to stay above the fray can cause more problems than they solve.

Meanwhile, All Tech Is Human is growing so fast, with over 5,000 members on its Slack channel as of this writing, that if it were a church, it would soon deserve the prefix “mega.” The group has also consistently impressed me with its inclusiveness: the volunteer and professional leadership of women and people of color is a point of major emphasis, and speaker lineups are among the most heterogeneous I’ve seen in any tech-related endeavor. Crowds, too, are full of young professionals from diverse backgrounds who participate in programs out of passion and curiosity, not hope of financial gain. Well, at least attendees don’t go to ATIH for direct financial gain; as is true with many successful religious congregations, the organization serves as an intentional incubator for professional networking. 

Still, having interviewed several dozen attendees, I’m convinced that many are hungry for communal support as they navigate a world in which tech has become a transcendent force, for better or worse.

Growth has brought things to a turning point. ATIH now stands to receive millions of dollars—including funds from large foundations and tech philanthropist demigods who once ignored it. And Polgar now finds himself in a networking stratosphere with people like Canadian prime minister Justin Trudeau, among other prominent politicos. Will the once-humble community remain dedicated to centering people on the margins of tech culture? Or will monied interests make it harder to fight for the people Christian theologians might call “the least of these”?

Techno-solutionism and related ideas can function as a kind of theology, justifying harm in the here and now with the promise of a sweet technological hereafter.

I first started looking into ATIH in late 2021, while researching my forthcoming book Tech Agnostic: How Technology Became the World’s Most Powerful Religion, and Why It Desperately Needs a Reformation (MIT Press, 2024). The book project began because I’d been coming across a striking number of similarities between modern technological culture and religion, and the parallels felt important, given my background. I am a longtime (nonreligious) chaplain at both Harvard and MIT. After two decades immersed in the world of faith, back in 2018 I gave up on what had been my dream: to build a nonprofit “godless congregation” for the growing population of atheists, agnostics, and the religiously unaffiliated. Having started that work just before social media mavens like Mark Zuckerberg began to speak of “connecting the world,” I ultimately lost faith in the notion of building community around either religion or secularism when I realized that technology had overtaken both.

Indeed, tech seems to be the dominant force in our economy, politics, and culture, not to mention a daily obsession that can increasingly look like an addiction from which some might plausibly seek the help of a higher power to recover. Tech culture has long been known for its prophets (Jobs, Gates, Musk, et al.), and tech as a whole is even increasingly oriented around moral and ethical messages, such as Google’s infamous “Don’t be evil.” 

The tech-as-religion comparison I’ve found myself drawing is often unflattering to tech leaders and institutions. Techno-solutionism and related ideas can function as a kind of theology, justifying harm in the here and now with the promise of a sweet technological hereafter; powerful CEOs and investors can form the center of a kind of priestly hierarchy, if not an outright caste system; high-tech weapons and surveillance systems seem to threaten an apocalypse of biblical proportions. 

When I discovered ATIH, I was pleasantly surprised to find a potentially positiveexample of the sort of dynamic I was describing. I am the sort of atheist who admits that certain features of religion can offer people real benefits. And ATIH seemed to be succeeding precisely because it genuinely operated like a secular, tech-­ethics-focused version of a religious congregation. “It does work that way,” Polgar acknowledged in February 2022, in the first of our several conversations on the topic. Since then, I’ve continued to admire ATIH’s communal and ethical spirit, while wondering whether communities devoted explicitly to tech ethics might just help bring about a reformation that saves tech from itself.

Along with admiration, I’ve also sought to determine whether ATIH is worthy of our faith.

Why a congregation?

I discovered ATIH’s events in late 2021, first through the online Responsible Tech University Summit, a day-long program dedicated to exploring the intersections of tech ethics and campus life. (One of ATIH’s signature programs is its Responsible Tech University Network, which involves, among other things, a growing group of over 80 student “university ambassadors” who represent the organization on their campuses.) All the organization’s programs are organized around typical tech ethics themes, like “the business case for AI ethics,” but participants attend as much for the community as for the course at hand. 

Sarah Husain, who’d worked on Twitter’s Trust and Safety team until it was eliminated by Elon Musk, told me at a May 2022 event that several colleagues in her field had spoken highly of ATIH, recommending she attend. Chana Deitsch, an undergraduate business student who participates in ATIH’s mentorship program, says it not only helps with job leads and reference letters but provides a sense of confidence and belonging. Alex Sarkissian, formerly a Deloitte consultant and now a Buddhist chaplaincy student, feels that the organization has potential “to be a kind of spiritual community for me in addition to my sangha [Buddhist congregation].”

I’ve encountered mainly earnest and insightful members like these, people who come together for serious mutual support and ethical reflection and—non-trivially—funaround a cause I’ve come to hold dear. Granted, few ATIH participants, in my observation, hold C-level tech positions, which could undermine the organization’s claims that it has the ability to unite stakeholders toward effectual action … or perhaps it simply signifies a populism that could eventually put sympathizers in high places?

Despite my skepticism toward both theology and technology, ATIH has often given me the feeling that I’ve found my own tech tribe.

Growing pains

Polgar is a nerdily charismatic former lawyer who has been developing the ideas and networks from which the organization sprouted for over a decade. As a young professor of business law at a couple of small, under-resourced colleges in Connecticut in the early 2010s, he began pondering the ethics of technologies that had recently emerged as dominant and ubiquitous forces across society and culture. Adopting the title “tech ethicist,” he began to write a series of missives on digital health and the idea of “co-creating a better tech future.” His 2017 Medium post “All Tech Is Human,” about how technology design should be informed by more than robotic rationality or utility, generated enthusiastic response and led to the formal founding of the organization a year later.

The ATIH concept took a while to catch on, Polgar told me. He worked unpaid for three years and came “close to quitting.” But his background inspired perseverance. Born in 1979 in Cooperstown, New York, Polgar was a philosophical kid who admired Nikola Tesla and wanted to be an inventor. “Why can’t I start something big,” he remembers thinking back then, “even from a little place like this?”

Despite their growing influence, Polgar and the organization continue to emphasize their outsider status. ATIH, he argues, is building its following in significant part with people who, for their interest in ethical approaches to technology, feel as unjustly ignored as he and many of his upstate peers felt in the shadow of New York City.

ATIH’s model, says the organization’s head of partnerships, Sandra Khalil, is to offer not a “sage on the stage” but, rather, a “guide on the side.” Khalil, a veteran of the US Departments of State and Homeland Security, also came to the organization with an outsider’s pugnacity, feeling “severely underutilized” in previous roles as a non-lawyer intent on “challenging the status quo.”

Polgar, however, hardly shrinks from opportunities to influence tech discourse, whether through media interviews with outlets like the BBC World News or by joining advisory boards like TikTok’s content advisory council. ATIH admits, in its “Ten Principles,” that it draws both from grassroots models, which it says “have ideas but often lack power,” and from “top-down” ones, which can “lack a diversity of ideas” but “have power.” The organization does not ask for or accept membership fees from participants, relying instead on major donations solicited by Polgar and his team, who control decision-making. There hasn’t seemed to be a significant call for more democracy—yet.

The founder as a god?

Part of why I’m insisting ATIH is a congregation is that the group assembled around Polgar demonstrates a religious zeal for organizing and relationship-building as tools for advancing positive moral values. Case in point: Rebekah Tweed, ATIH’s associate director, once worked in an actual church, as a youth pastor; now she applies a skill set my field calls “pastoral care” to creating mutually supportive space for ethically minded techies.

In 2020, Tweed volunteered on ATIH’s first major public project, the Responsible Tech Guide, a crowdsourced document that highlighted the hundreds of people and institutions working in the field. After she formally joined the organization, it landed its first big-time donation: $300,000 over two years from the Ford Foundation, to pay her salary as well as Polgar’s. They were its first full-time employees. 

Polgar was repeatedly rebuffed in early attempts to recruit large gifts, but of late, the growing ATIH team has received significant support from sources including Melinda French Gates’s Pivotal Ventures and about half a million dollars each from Schmidt Futures (the philanthropic fund of former Google CEO Eric Schmidt) and the Patrick J. McGovern Foundation (yet another tech billionaire’s fortune).

Can an organization that serves a truly inclusive audience afford to get in bed with Fortune 500 companies and/or multibillionaires who will inevitably be motivated by a desire to seem ethical?

The question is: Can an organization that serves a truly inclusive audience, emphasizing humanity and ethics in its own name, afford to get in bed with Fortune 500 companies like Google and Microsoft and/or multibillionaires who will inevitably be motivated by a desire to seem ethical and responsible, even when they decidedly are not? Or rather, can it afford not to do so, when growth means the organization’s staff can grow (and earn a living wage)? And could such tensions someday cause a full-blown schism in the ATIH community? 

The potential challenges first came to light for me at a May 2022 summit in New York. For the first time in several large ATIH events I had personally observed, the meeting featured an invited speaker employed by one of the world’s largest tech companies: Harsha Bhatlapenumarthy, a governance manager at Meta and also a volunteer leader in a professional association called Trust and Safety. 

Sandra Khalil with three other participants seated beside her
Not a “sage on the stage” but a “guide on the side”: ATIH head of partnerships Sandra Khalil moderates an event in London.

LIZ ISLES/ALL TECH IS HUMAN

Bhatlapenumarthy—whose panel was called “Tech Policy & Social Media: Where are we headed?”—avoided addressing any of her employer’s recent controversies. Instead of offering any meaningful comment in response to Meta’s troubles over its handling of things from pro-anorexia content to election misinformation, she spoke only vaguely about its ethical responsibilities. The company, she said, was focused on “setting the content moderator up for success.” Which is an interesting way to describe a situation in which Meta had, for example, recently been sued for union busting and human trafficking by content moderators in Kenya.

Several attendees were taken aback that Bhatlapenumarthy’s advocacy for her powerful employer went essentially unchallenged during the panel. Among them was Yael Eisenstat, Facebook’s former global head of election integrity operations for political advertising and the summit’s closing speaker. In a fireside chat immediately following the panel in which Bhatlapenumarthy participated, Eisenstat, who’d been a whistleblower against her former employer, eloquently dismissed Bhatlapenumarthy’s non-remarks. “I believe [Meta] doesn’t want this on their platform,” she said, referring to violent and deceptive content, “but they will not touch their business model.” Eisenstat added that she would feel “more encouraged” if companies would stop “holding up the founder as a god.” 

Eisenstat added to me later, by private message, that “sending a more junior-level employee to speak one-directionally about Meta’s vision of responsible tech is somewhat disingenuous.” In inviting such a speaker, couldn’t ATIH reasonably be understood to be implicated in the offense?

If Bhatlapenumarthy’s presence as a seeming mouthpiece for Big Tech talking points had been an isolated incident, I might have ignored it. But a few months later, I found myself wondering if a concerning pattern was emerging.

Digital Sunday school

In September 2022, I attended Building a Better Tech Future for Children, an ATIH event cohosted with the Joan Ganz Cooney Center at Sesame Workshop, a nonprofit research and innovation lab associated with the legendary children’s TV show Sesame Street. This struck me as a shrewd partnership for ATIH: every congregation needs a Sunday school. A community organization aspiring to the advancement of humanity and the betterment of the world will inevitably turn its thoughts to educating the next generation according to its values. 

After a keynote from Elizabeth Milovidov, senior manager for digital child safety at the Lego Group, on designing digital experiences with children’s well-being in mind came a panel featuring speakers from influential players such as the Omidyar Network and TikTok, as well as young activists. The group discussed the risks and harms facing young people online, and the general tone was optimistic that various efforts to protect them would be successful, particularly if built upon one another. “Digital spaces can be a positive source in the lives of young people,” said the moderator, Mina Aslan.

Also on the panel was Harvard Medical School professor Michael Rich, a self-proclaimed “mediatrician”—a portmanteau of “media’’ and “pediatrician.” Rich made good points—for example, stressing the importance of asking kids what they’re hoping for from tech, not just talking about the risks they confront. But one comment triggered my spider-sense: when he said that today’s tech is like his generation’s cigarettes, in that you can’t just tell kids “Don’t do it.” 

The analogy between tobacco and social media is at best a bizarre one to draw. Millions of young people became smokers not just through peer pressure, but because for decades, Big Tobacco’s whole business model was built on undue corporate influence and even outright lying, including paying influential doctors and scientists to downplay the death they dealt. Surely ATIH’s leadership would want to avoid any hint that such practices would be acceptable in tech?

Tobacco eventually became among the most heavily regulated industries in history, with results including, famously, the US surgeon general’s warnings on tobacco ads and packages. Now the current surgeon general, Vivek Murthy, has warned there is “growing evidence” that social media is “associated with harm to young people’s mental health.” But on the panel (and in his commentary elsewhere), Rich only briefly acknowledged such potential harms, forgoing talk of regulating social media for the idea of cultivating “resilience” in the industry’s millions of young customers.

To be clear, I agree with Rich that it is a losing strategy to expect young people to completely abstain from social media. But I fear that tech and our broader society alike are not taking nearly enough ethical responsibility for protecting children from what can be powerful engines of harm. And I was disappointed to see Rich’s relatively sanguine views not only expressed but centered at an ATIH meeting.

How much responsibility?

How much responsibility should a “responsible tech” organization like ATIH take—or not—for inviting speakers with corporate ties, especially when it is not fully open with its audience about such ties? How obligated is ATIH to publicly interrogate the conclusions of such speakers? 

Rich’s response to questions I’d asked after his panel was, essentially, that parents ought to channel their energies into making “better choices” around tech, which—conveniently for some of the doctor’s corporate sponsors—lays the responsibility for children’s safety on the parents instead of the tech industry. His lab, I later learned, raised nearly $6 million in 2022, at least partly through grants from Meta, TikTok, and Amazon. When TikTok CEO Shou Chew testified before the US Congress in March 2023, he cited Rich’s lab—and only Rich’s lab—as an example of how TikTok used science and medicine to protect minors. Does this represent a conflict of interest—and therefore a serious ethical failing on the part of both Rich and ATIH for platforming him? I don’t know. I do worry, though, that there’s something inhumane in Rich’s emphasis on building kids’ “resilience” rather than interrogating why they should have to be so resilient against tech in the first place.

What kind of institution does ATIH want to be? One that pushes back against the powerful, or one that upholds a corporate-friendly version of diversity, allowing its wealthy sponsors to remain comfortable at (almost) all times? As the Gospel of Matthew says, no man (or organization of “humans”) can serve two masters.

Asking around ATIH’s network about my concerns, I found ambivalence. “I do believe it is possible to do research sponsored by companies ethically,” said Justin Hendrix, an occasional ATIH participant and editor of Tech Policy Press, a wonky journal in which academics and others tend to critique established tech narratives. “But it is right to scrutinize it for signs of impropriety.”

“I see your concern,” Polgar later told me when I asked him about my apprehensions. Raising his brow with a look of surprise when I wondered aloud whether Rich’s funding sources might have affected the commentary he offered for ATIH’s audience, Polgar made clear he did not agree with all the doctor’s views. He also admitted it is his “worst fear” that his organization might be co-opted by funding opportunities that make it harder “to be a speaker of truth.”

“Don’t become a parody of yourself,” he said, seeming to turn the focus of his homily inward.

Team human

Several months after the Sesame Workshop event, I attended a crowded mixer at ATIH’s now-regular monthly venue, the Midtown Manhattan offices of the VC firm Betaworks, with a very different kind of speaker: the tech critic Douglas Rushkoff, a freethinker who has often spoken of the need for a kind of secular faith in our common humanity in the face of tech capitalism’s quasi-religious extremism. Polgar is a longtime admirer of his work.

“All tech bros are human,” Rushkoff cracked, launching into an enthusiastically received talk. Fresh off a publicity tour for a book about tech billionaires buying luxury bunkers to escape a potential doomsday of their own making, Rushkoff provided a starkly antiauthoritarian contrast to the speakers I’d taken issue with at the earlier events.

Ultimately, I don’t know whether ATIH will succeed in its attempts to serve what Rushkoff would call “team human” rather than becoming an accessory to the overwhelming wealth tech can generate by seeming to make humanity commodifiable and, ultimately, redundant. I do, however, continue to believe that building a more humane tech future will require communal support, because none of us can do it alone. 

I chose the theme of tech agnosticism for my book in part because I am often reminded that I truly don’t know—and neither do you—when or where tech’s enormous powers might actually do the good they purport to do. But I suspect we’re going to need a lot more of what Neil Postman’s 1992 book Technopoly, an early exploration of the theme of tech-as-­religion and a precursor to the techlash, called “loving resistance fighters.” While I lack prophetic abilities to know whether Polgar and co. will help spark such a resistance, the potential is genuinely there. In a participatory congregation, one can always worry about co-­option, as even Polgar himself admits he does; but isn’t it also the responsibility of each of us to actively help keep our communities accountable to their own ethical values?

Let’s maintain our skepticism, while hoping the ethical tech congregation gives us continued reason to keep the faith. 

Greg M. Epstein serves as the humanist chaplain at Harvard University and MIT and as the convener for ethical life at MIT’s Office of Religious, Spiritual, and Ethical Life.

Mon, 14 Aug 2023 20:59:00 -0500 en text/html https://www.technologyreview.com/2023/08/15/1077369/tech-ethics-congregation/
Killexams : Is Bing too belligerent? Microsoft looks to tame AI chatbot – The Associated Press No result found, try new keyword!The Microsoft Bing logo and the website’s page are shown in this photo taken in New York on Tuesday, Feb. 7, 2023. Microsoft is promising to make improvements to its new artificially intelligent Bing ... Tue, 22 Aug 2023 22:49:00 -0500 text/html https://www.inferse.com/688773/is-bing-too-belligerent-microsoft-looks-to-tame-ai-chatbot-the-associated-press/ Killexams : This Company Is Quickly Becoming a Leader in AI -- and Its Not Nvidia, Alphabet, or Microsoft No result found, try new keyword!When it comes to artificial intelligence (AI), companies such as Alphabet, Microsoft, and Nvidia seem to have become analogous to the buzzword. While it may appear that big tech has a stronghold on ... Tue, 15 Aug 2023 02:16:00 -0500 text/html https://www.nasdaq.com/articles/this-company-is-quickly-becoming-a-leader-in-ai-and-its-not-nvidia-alphabet-or-microsoft
AZ-500 exam dump and training guide direct download
Training Exams List