EXAM NUMBER : AZ-303
EXAM NAME : Microsoft Azure Architect Technologies
Candidates for this exam should have subject matter expertise in designing and implementing solutions that run on Microsoft Azure, including aspects like compute, network, storage, and security. Candidates should have intermediate-level skills for administering Azure. Candidates should understand Azure development and DevOps processes.
Responsibilities for an Azure Solution Architect include advising stakeholders and translating business requirements into secure, scalable, and reliable cloud solutions.
An Azure Solution Architect partners with cloud administrators, cloud DBAs, and clients to implement solutions.
A candidate for this exam should have advanced experience and knowledge of IT operations, including networking, virtualization, identity, security, business continuity, disaster recovery, data platform, budgeting, and governance. This role should manage how decisions in each area affect an overall solution. In addition, this role should have expert-level skills in Azure administration and have experience with Azure development and DevOps processes.
- Implement and monitor an Azure infrastructure (50-55%)
- Implement management and security solutions (25-30%)
- Implement solutions for apps (10-15%)
- Implement and manage data platforms (10-15%)
Implement and Monitor an Azure Infrastructure (50-55%)
Implement cloud infrastructure monitoring
monitor security
monitor performance
monitor health and availability
monitor cost
configure advanced logging
configure logging for workloads
initiate automated responses by using Action Groups
configure and manage advanced alerts
Implement storage accounts
select storage account options based on a use case
configure Azure Files and blob storage
configure network access to the storage account
implement Shared Access Signatures and access policies
implement Azure AD authentication for storage
manage access keys
implement Azure storage replication
implement Azure storage account failover
Implement VMs for Windows and Linux
configure High Availability
configure storage for VMs
select virtual machine size
implement Azure Dedicated Hosts
deploy and configure scale sets
configure Azure Disk Encryption
Automate deployment and configuration of resources
save a deployment as an Azure Resource Manager template
modify Azure Resource Manager template
evaluate location of new resources
configure a virtual disk template
deploy from a template
manage a template library
create and execute an automation runbook
Implement virtual networking
implement VNet to VNet connections
implement VNet peering
Implement Azure Active Directory
add custom domains
configure Azure AD Identity Protection
implement self-service password reset
implement Conditional Access including MFA
configure user accounts for MFA
configure fraud alerts
configure bypass options
configure Trusted IPs
configure verification methods
implement and manage guest accounts
manage multiple directories
Implement and manage hybrid identities
install and configure Azure AD Connect
identity synchronization options
configure and manage password sync and password writeback
configure single sign-on
use Azure AD Connect Health
Implement Management and Security Solutions (25-30%)
Manage workloads in Azure
migrate workloads using Azure Migrate
implement Azure Backup for VMs
implement disaster recovery
implement Azure Update Management
Implement load balancing and network security
implement Azure Load Balancer
implement an application gateway
implement a Web Application Firewall
implement Azure Firewall
implement Azure Firewall Manager
implement the Azure Front Door Service
implement Azure Traffic Manager
implement Network Security Groups and Application Security Groups
implement Bastion
Implement and manage Azure governance solutions
create and manage a hierarchical structure that contains management groups, subscriptions, and resource groups
assign RBAC roles
create a custom RBAC role
configure access to Azure resources by assigning roles
configure management access to Azure
interpret effective permissions
set up and perform an access review
implement and configure an Azure Policy
implement and configure an Azure Blueprint
Manage security for applications
implement and configure KeyVault
implement and configure Managed Identities
register and manage applications in Azure AD
Implement Solutions for Apps (10-15%)
Implement an application infrastructure
create and configure Azure App Service
create an App Service Web App for Containers
create and configure an App Service plan
configure an App Service
configure networking for an App Service
create and manage deployment slots
implement Logic Apps
implement Azure Functions
Implement container-based applications
create a container image
configure Azure Kubernetes Service
publish and automate image deployment to the Azure Container Registry
publish a solution on an Azure Container Instance
Implement and Manage Data Platforms (10-15%)
Implement NoSQL databases
configure storage account tables
select appropriate CosmosDB APIs
set up replicas in CosmosDB
Implement Azure SQL databases
configure Azure SQL database settings
implement Azure SQL Database managed instances
configure HA for an Azure SQL database
publish an Azure SQL database
Microsoft Azure Architect Technologies test prep
A persistent problem in the IT industry is the shortage of high-quality exam preparation materials. Our test preparation material gives you everything you need to take a certification exam. Our AZ-303 practice test provides exam questions with verified answers that reflect the real exam. We at killexams.com are committed to helping you pass your AZ-303 exam with high scores.
Question: 334
HOTSPOT
Your company hosts multiple websites by using Azure virtual machine scale sets (VMSS) that run Internet Information Services (IIS).
All network communications must be secured by using end-to-end Secure Sockets Layer (SSL) encryption. User sessions must be routed to the same server by using cookie-based session affinity.
The image shown depicts the network traffic flow for the websites to the VMSS.
Use the drop-down menus to select the answer choice that answers each question. NOTE: Each correct selection is worth one point.
Answer:
Explanation:
Box 1: Azure Application Gateway
You can create an application gateway with URL path-based redirection using Azure PowerShell.
Box 2: Path-based redirection and Websockets
Reference: https://docs.microsoft.com/bs-latn-ba/azure/application-gateway/tutorial-url-redirect-powershell
Question: 335
HOTSPOT
You have an Azure subscription that contains multiple resource groups.
You create an availability set as shown in the following exhibit.
You deploy 10 virtual machines to AS1.
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point.
Answer:
Explanation:
Box 1: 6
Ten VMs spread across three update domains are distributed 4-3-3. Only one update domain is rebooted at a time, so in the worst case the domain with 4 VMs goes down, leaving 6 VMs available across the remaining two domains.
An update domain is a group of VMs and underlying physical hardware that can be rebooted at the same time.
As you create VMs within an availability set, the Azure platform automatically distributes your VMs across these
update domains. This approach ensures that at least one instance of your application always remains running as the
Azure platform undergoes periodic maintenance.
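As a quick sanity check of that arithmetic (the round-robin placement below is an illustrative assumption; Azure controls the exact spread):

```python
# 10 VMs distributed round-robin across 3 update domains.
vms, domains = 10, 3
per_domain = [len(range(d, vms, domains)) for d in range(domains)]
print(per_domain)             # [4, 3, 3]

# Only one update domain is rebooted at a time, so the worst case
# is losing the largest domain, which leaves 6 VMs available.
print(vms - max(per_domain))  # 6
```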
Box 2: the West Europe region and the RG1 resource group
Reference: https://docs.microsoft.com/en-us/azure/virtual-machines/windows/regions
Question: 336
You have an Azure subscription that contains 100 virtual machines. You have a set of Pester tests in PowerShell that
validate the virtual machine environment. You need to run the tests whenever there is an operating system update on
the virtual machines. The solution must minimize implementation time and recurring costs.
Which three resources should you use to implement the tests? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. Azure Automation runbook
B. an alert rule
C. an Azure Monitor query
D. a virtual machine that has network access to the 100 virtual machines
E. an alert action group
Answer: ABE
Explanation:
AE: You can call Azure Automation runbooks by using action groups or by using classic alerts to automate tasks based on alerts.
B: Alerts are one of the key features of Azure Monitor. They allow us to alert on actions within an Azure subscription.
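As a rough sketch of that pattern, an Azure Automation runbook wired to an action group receives the alert payload and can then kick off the validation tests. The example below is a hypothetical Python runbook (the exam scenario uses Pester in PowerShell); the payload fields follow the common alert schema, and the parameter handling is an assumption:

```python
import json
import sys

# Azure Automation passes runbook input as command-line arguments; an
# action group webhook delivers the alert in the common alert schema.
payload = json.loads(sys.argv[1]) if len(sys.argv) > 1 else {}

essentials = payload.get("data", {}).get("essentials", {})
for resource_id in essentials.get("alertTargetIDs", []):
    # Placeholder: launch the environment-validation tests against the
    # virtual machine that raised the operating system update alert.
    print(f"Running validation tests against {resource_id}")
```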
Reference:
https://docs.microsoft.com/en-us/azure/automation/automation-create-alert-triggered-runbook
https://techsnips.io/snips/how-to-create-and-test-azure-monitor-alerts/?page=13
Question: 337
HOTSPOT
You have an Azure subscription that contains the resource groups shown in the following table.
You create an Azure Resource Manager template named Template1 as shown in the following exhibit.
From the Azure portal, you deploy Template1 four times by using the settings shown in the following table.
What is the result of the deployment? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer:
Question: 338
You have an Azure subscription that contains 10 virtual machines on a virtual network. You need to create a graph
visualization to display the traffic flow between the virtual machines.
What should you do from Azure Monitor?
A. From Activity log, use quick insights.
B. From Metrics, create a chart.
C. From Logs, create a new query.
D. From Workbooks, create a workbook.
Answer: C
Explanation:
Navigate to Azure Monitor and select Logs to begin querying the data.
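As an illustration, a similar query can be run programmatically with the azure-monitor-query SDK; the workspace ID is a placeholder and the KQL shown is a simplified stand-in for the connection-analysis queries in the referenced post:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# VMConnection is the table Azure Monitor for VMs populates with
# network connection data between monitored machines.
response = client.query_workspace(
    "<workspace-id>",
    "VMConnection | summarize Connections = count() by SourceIp, DestinationIp",
    timespan=timedelta(hours=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```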
Reference:
https://azure.microsoft.com/en-us/blog/analysis-of-network-connection-data-with-azure-monitor-for-virtualmachines/
Question: 339
HOTSPOT
You have an Azure Active Directory (Azure AD) tenant named contoso.com.
The tenant contains the users shown in the following table.
The tenant contains computers that run Windows 10.
The computers are configured as shown in the following table.
You enable Enterprise State Roaming in contoso.com for Group1 and GroupA.
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Answer:
Explanation:
Enterprise State Roaming provides users with a unified experience across their Windows devices and reduces the time
needed for configuring a new device.
Box 1: Yes
Box 2: No
Box 3: Yes
Reference: https://docs.microsoft.com/en-us/azure/active-directory/devices/enterprise-state-roaming-overview
Question: 340
HOTSPOT
You plan to deploy an Azure virtual machine named VM1 by using an Azure Resource Manager template. You need
to complete the template.
What should you include in the template? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer:
Explanation:
Within your template, the dependsOn element enables you to define one resource as a dependent on one or more
resources. Its value can be a comma-separated list of resource names.
Box 1: Microsoft.Network/networkInterfaces
This resource is a virtual machine. It depends on two other resources:
Microsoft.Storage/storageAccounts
Microsoft.Network/networkInterfaces
Box 2: Microsoft.Network/virtualNetworks/
The dependsOn element enables you to define one resource as a dependent on one or more resources. The resource
depends on two other resources:
Microsoft.Network/publicIPAddresses
Microsoft.Network/virtualNetworks
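To make the shape of those dependencies concrete, here is the relevant skeleton of such a template's resources array, written as a Python literal; the resource names are invented for illustration and are not taken from the exam exhibit:

```python
# Skeleton of an ARM template "resources" array, reduced to type,
# name, and dependsOn to show the dependency chain.
resources = [
    {
        "type": "Microsoft.Compute/virtualMachines",
        "name": "VM1",
        "dependsOn": [
            "[resourceId('Microsoft.Storage/storageAccounts', 'vm1storage')]",
            "[resourceId('Microsoft.Network/networkInterfaces', 'vm1-nic')]",
        ],
    },
    {
        "type": "Microsoft.Network/networkInterfaces",
        "name": "vm1-nic",
        "dependsOn": [
            "[resourceId('Microsoft.Network/publicIPAddresses', 'vm1-ip')]",
            "[resourceId('Microsoft.Network/virtualNetworks', 'vnet1')]",
        ],
    },
]
```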
Reference: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-tutorial-create-templates-with-dependent-resources
Question: 341
You have an Azure subscription.
You have 100 Azure virtual machines.
You need to quickly identify underutilized virtual machines that can have their service tier changed to a less expensive
offering.
Which blade should you use?
A. Metrics
B. Customer insights
C. Monitor
D. Advisor
Answer: D
Explanation:
Advisor helps you optimize and reduce your overall Azure spend by identifying idle and underutilized
resources. You can get cost recommendations from the Cost tab on the Advisor dashboard.
Reference: https://docs.microsoft.com/en-us/azure/advisor/advisor-cost-recommendations
Question: 342
You have an Azure subscription that contains an Azure Log Analytics workspace.
You have a resource group that contains 100 virtual machines. The virtual machines run Linux.
You need to collect events from the virtual machines to the Log Analytics workspace.
Which type of data source should you configure in the workspace?
A. Syslog
B. Linux performance counters
C. custom fields
Answer: A
Explanation:
Syslog is an event logging protocol that is common to Linux. Applications will send messages that may be stored on
the local machine or delivered to a Syslog collector. When the Log Analytics agent for Linux is installed, it configures
the local Syslog daemon to forward messages to the agent. The agent then sends the message to Azure Monitor where
a corresponding record is created.
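To illustrate the first hop of that pipeline, a Linux application can write to the local Syslog daemon with the Python standard library alone; the agent then forwards messages from the configured facilities to the workspace:

```python
import logging
import logging.handlers

logger = logging.getLogger("sample-app")
logger.setLevel(logging.INFO)

# /dev/log is the local syslog socket on most Linux distributions.
handler = logging.handlers.SysLogHandler(
    address="/dev/log",
    facility=logging.handlers.SysLogHandler.LOG_USER,
)
logger.addHandler(handler)

# Written to the local daemon, forwarded by the Log Analytics agent,
# and surfaced as a record in the workspace's Syslog table.
logger.info("application event destined for Azure Monitor")
```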
Reference: https://docs.microsoft.com/en-us/azure/azure-monitor/platform/data-sources-custom-logs
Question: 343
HOTSPOT
Your network contains an Active Directory domain named adatum.com and an Azure Active Directory (Azure AD)
tenant named adatum.onmicrosoft.com.
Adatum.com contains the user accounts in the following table.
Adatum.onmicrosoft.com contains the user accounts in the following table.
You need to implement Azure AD Connect. The solution must follow the principle of least privilege.
Which user accounts should you use in Adatum.com and Adatum.onmicrosoft.com to implement Azure AD Connect? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer:
Explanation:
Box 1: User5
In Express settings, the installation wizard asks for the following:
AD DS Enterprise Administrator credentials
Azure AD Global Administrator credentials
The AD DS Enterprise Admin account is used to configure your on-premises Active Directory. These credentials are only used during the installation and are not used after the installation has completed. The Enterprise Admin, not the Domain Admin, should make sure the permissions in Active Directory can be set in all domains.
Box 2: UserA
Azure AD Global Admin credentials are only used during the installation and are not used after the installation has
completed. It is used to create the Azure AD Connector account used for synchronizing changes to Azure AD. The
account also enables sync as a feature in Azure AD.
Reference: https://docs.microsoft.com/en-us/azure/active-directory/connect/active-directory-aadconnect-accounts-permissions
Question: 344
HOTSPOT
You plan to create an Azure Storage account in the Azure region of East US 2.
You need to create a storage account that meets the following requirements:
Replicates synchronously
Remains available if a single data center in the region fails
How should you configure the storage account? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Answer:
Explanation:
Box 1: Zone-redundant storage (ZRS)
Zone-redundant storage (ZRS) replicates your data synchronously across three storage clusters in a single region.
LRS would not remain available if a data center in the region fails.
GRS and RA-GRS use asynchronous replication.
Box 2: StorageV2 (general purpose v2)
ZRS only supports general-purpose v2 (StorageV2) accounts.
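A sketch of creating an account with those settings through the azure-mgmt-storage SDK; the subscription, resource group, and account names are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.storage_accounts.begin_create(
    "rg1",
    "contosozrsaccount",
    {
        "location": "eastus2",
        "kind": "StorageV2",              # ZRS requires general-purpose v2
        "sku": {"name": "Standard_ZRS"},  # synchronous, zone-redundant
    },
)
account = poller.result()
print(account.provisioning_state)
```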
Reference:
https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy
https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy-zrs
Question: 345
You have a virtual network named VNet1 as shown in the exhibit. (Click the Exhibit tab.)
No devices are connected to VNet1.
You plan to peer VNet1 to another virtual network named VNet2. VNet2 has an address space of 10.2.0.0/16.
You need to create the peering.
What should you do first?
A. Configure a service endpoint on VNet2.
B. Add a gateway subnet to VNet1.
C. Create a subnet on VNet1 and VNet2.
D. Modify the address space of VNet1.
Answer: D
Explanation:
The virtual networks you peer must have non-overlapping IP address spaces. The exhibit indicates that VNet1 has an
address space of 10.2.0.0/16, which is the same as VNet2, and thus overlaps. We need to change the address space for
VNet1.
Reference: https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-manage-peering#requirements-and-constraints
Question: 346
HOTSPOT
You have an Azure Resource Manager template named Template1 in the library as shown in the following exhibit.
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point.
Answer:
Explanation:
Reference: https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/template-syntax
Question: 347
DRAG DROP
You have an Azure subscription that contains two virtual networks named VNet1 and VNet2. Virtual machines
connect to the virtual networks.
The virtual networks have the address spaces and the subnets configured as shown in the following table.
You need to add the address space of 10.33.0.0/16 to VNet1. The solution must ensure that the hosts on VNet1 and
VNet2 can communicate.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Answer:
Explanation:
Step 1: Remove the peering between VNet1 and VNet2.
You can't add address ranges to, or delete address ranges from, a virtual network's address space once the virtual network is peered with another virtual network. To add or remove address ranges, delete the peering, add or remove the address ranges, then re-create the peering.
Step 2: Add the 10.33.0.0/16 address space to VNet1.
Step 3: Re-create the peering between VNet1 and VNet2.
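A hedged sketch of those three steps using the azure-mgmt-network SDK; the subscription, resource group, and peering names are placeholders, and a real deployment would also remove and re-create the reverse peering on VNet2:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Step 1: remove the existing peering from VNet1.
client.virtual_network_peerings.begin_delete(
    "rg1", "VNet1", "vnet1-to-vnet2"
).result()

# Step 2: add the new address range to VNet1.
vnet1 = client.virtual_networks.get("rg1", "VNet1")
vnet1.address_space.address_prefixes.append("10.33.0.0/16")
client.virtual_networks.begin_create_or_update("rg1", "VNet1", vnet1).result()

# Step 3: re-create the peering to VNet2.
client.virtual_network_peerings.begin_create_or_update(
    "rg1", "VNet1", "vnet1-to-vnet2",
    {
        "remote_virtual_network": {"id": "<VNet2 resource ID>"},
        "allow_virtual_network_access": True,
    },
).result()
```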
Reference: https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-manage-peering
Question: 348
HOTSPOT
You have an Azure Resource Manager template for a virtual machine named Template1.
Template1 has the following parameters section.
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
Answer:
Explanation:
Box 1: Yes
The Resource group is not specified.
Box 2: No
The default value for the operating system is Windows 2016 Datacenter.
Box 3: Yes
Location has no default value.
Reference:
https://docs.microsoft.com/bs-latn-ba/azure/virtual-machines/windows/ps-template
Microsoft won over Washington. A new AI debate tests its president.
In 2017, Microsoft president Brad Smith made a bold prediction. Speaking on a panel at the Davos World Economic Forum, he said governments would be talking about how to regulate artificial intelligence in about five years.
Another executive bristled at the idea, telling Smith no one could know the future.
But the prophecy was right. As if on schedule, on Thursday morning Smith convened a group of government officials, members of Congress and influential policy experts for a speech on a debate he’s long been anticipating. Smith unveiled his “blueprint for public governance of AI” at Planet Word, a language arts museum that he called a “poetic” venue for a conversation about AI.
Rapid advances in AI and the surging popularity of chatbots such as ChatGPT have moved lawmakers across the globe to grapple with new AI risks. Microsoft’s $10 billion investment in ChatGPT’s parent company, OpenAI, has thrust Smith firmly into the center of this frenzy.
Smith is drawing on years of preparation for the moment. He has discussed AI ethics with leaders ranging from the Biden administration to the Vatican, where Pope Francis warned Smith to “keep your humanity.” He consulted recently with Senate Majority Leader Charles E. Schumer, who has been developing a framework to regulate artificial intelligence. Smith shared Microsoft’s AI regulatory proposals with the New York Democrat, who has “pushed him to think harder in some areas,” he said in an interview with The Washington Post.
His policy wisdom is aiding others in the industry, including OpenAI CEO Sam Altman, who consulted with Smith as he prepared policy proposals discussed in his recent congressional testimony. Altman called Smith a “positive force” willing to provide guidance on short notice, even on naive ideas.
“In the nicest, most patient way possible, he’ll say ‘That’s not the best idea for these reasons,’” Altman said. “‘Here’s 17 better ideas.’”
But it’s unclear whether Smith will be able to sway wary lawmakers amid a flurry of burgeoning efforts to regulate AI, a technology he compares in potential to the printing press but that he says holds cataclysmic risks.
“History would say if you go too far to slow the adoption of the technology you can hold your society back,” said Smith. “If you let technology go forward without any guardrails and you throw responsibility and the rule of law to the wind, you will likely pay a price that’s far in excess of what you want.”
In Thursday’s speech, Smith endorsed creating a new government agency to oversee AI development, and creating “safety brakes” to rein in AI that controls critical infrastructure, including the electrical grid, water system, and city traffic flows.
His call for tighter regulations on a technology that could define his company’s future may appear counterintuitive. But it’s part of Smith’s well-worn playbook, which has bolstered his reputation as the tech industry’s de facto ambassador to Washington.
Other companies appear to be taking notes. In the past month, OpenAI and Google — one of Microsoft’s top competitors — unveiled their own visions for the future of AI regulation.
But Microsoft’s embrace of ChatGPT catapults the 48-year-old company, along with Smith, to the center of a new Washington maelstrom. He’s also facing battles on multiple fronts in the United States and abroad as he tries to close the company’s largest ever acquisition, that of gaming giant Activision Blizzard.
The debate marks a career-defining test of whether Microsoft’s success in Washington can be attributed to Smith’s political acumen — or the company’s distance from the most radioactive tech policy issues.
The proactive calls for regulation are the result of a strategy that Smith first proposed more than two decades ago. When he interviewed for Microsoft’s top legal and policy job in late 2001, he presented a single slide to the executives with one message: It’s time to make peace. (Businessweek, since purchased by Bloomberg, first reported the slide.)
For Microsoft, which had developed a reputation as a corporate bully, the proposition marked a sea change. Once Smith secured the top job, he settled dozens of cases with governments and companies that had charged Microsoft with alleged anticompetitive tactics.
Smith found ways to ingratiate himself with lawmakers as a partner rather than an opponent, using hard-won lessons from Microsoft’s brutal antitrust battles in the 1990s, when the company engaged in drawn-out legal battles over accusations it wielded a monopoly in personal computers.
The pivot paid off. Four years ago, as antitrust scrutiny was building of Silicon Valley, Microsoft wasn’t a target. Smith instead served as a critical witness, helping lawmakers build the case that Facebook, Apple, Amazon and Google engaged in anti-competitive, monopoly-style tactics to build their dominance, said Rep. David N. Cicilline (D-R.I.), who served as the chair of the House Judiciary antitrust panel that led the probe.
Smith recognized Microsoft was a “better company, a more innovative company” because of its clashes with Washington, Cicilline said. Smith also proactively adopted some policies lawmakers proposed, which other Silicon Valley companies aggressively lobbied against, he added.
“He provided a lot of wisdom and was a very responsible tech leader, quite different from the leadership at the other companies that were investigated,” Cicilline said.
In particular, Smith has deployed this conciliatory model in areas where Microsoft has far less to lose than its Big Tech competitors.
In 2018, Smith called for policies that would require the government to obtain a warrant to use facial recognition, as competitors such as Amazon aggressively pursued government facial recognition contracts. In 2019, he criticized Facebook for the impact of foreign influence on its platform during the 2016 elections — an issue Microsoft’s business-oriented social network, LinkedIn, largely didn’t confront. He has said that Section 230, a key law that social media companies use as a shield from lawsuits, had outlived its utility.
“Having engaged with executives across a number of sectors over the years, I’ve found Brad to be thoughtful, proactive and honest, particularly in an industry prone to obfuscation,” said Sen. Mark R. Warner (D-Va.).
But as Microsoft finds itself in Washington’s sights for the first time in decades, Smith’s vision is being newly tested. Despite a global charm offensive and a number of concessions intended to promote competition in gaming, both the U.K. competition authority and the Federal Trade Commission in the United States have recently sued to block Microsoft’s $69 billion acquisition of Activision Blizzard.
Smith signaled a new tone the day the FTC decision came down.
“While we believed in giving peace a chance, we have complete confidence in our case and welcome the opportunity to present our case in court,” Smith said in a statement. The company has appealed both the U.K. and FTC decisions. Smith said he continues to look for opportunities where he can find common ground with regulators who opposed the deal.
Threats to peace
When Microsoft was gearing up for regulatory scrutiny of the Activision Blizzard deal, Smith traveled to Washington to talk about how the company was “adapting ahead of regulation.” He announced Microsoft would adopt a series of new rules to boost competition in its app stores and endorsed several legislative proposals that would force other companies to follow suit.
On Thursday, he once again tried to stay a step ahead of the panic among Washington policymakers. Smith delivered Thursday’s address in the style of a tech company demo day, where executives theatrically unveil new products. There were more than half a dozen lawmakers in the audience, including Rep. Ted Lieu (D-Calif.), who has used his computer science background to position himself as a leading AI policymaker, and Rep. Ken Buck (R-Colo.), who co-chaired the antitrust investigation into tech companies with Cicilline.
Smith proposed that the Biden Administration could swiftly promote responsible AI development by passing an executive order requiring companies selling AI software to the government to abide by risk management rules developed by the National Institute of Standards and Technology, a federal laboratory that develops standards for new technology. (Such an order could favor Microsoft in government contracts, as the company promised the White House that it would implement the rules over the summer.)
He also called for regulation that would address multiple levels of the “tech stack,” the layers of technology ranging from data center infrastructure to applications enabling AI models to function. Smith and his Microsoft colleagues have long made education a key part of their policy strategy, and Smith has been focused on educating lawmakers, members of the Biden administration and their staff about how the AI tech stack works in recent one-on-one meetings, said Natasha Crampton, the company’s chief of Responsible AI, in an interview.
Smith, who has worked at Microsoft for nearly 30 years, said he views AI as the most important policy issue of a career that has spanned policy debates about surveillance, intellectual property, privacy and more.
But he is clear-eyed that more political obstacles lie ahead for Microsoft, saying in an interview that “life is more challenging” in the AI space, as many legislatures around the world simultaneously consider new tech regulations, including on artificial intelligence.
“We’re dealing with questions that don’t yet have answers,” Smith said. “So you have to expect that life is going to be more complicated.”
Remembering GitHub's Office, a Monument to Tech Culture
It was the spring of 2016, and I was in the Oval Office, waiting to interview for a job. Only I wasn’t in Washington, DC. I was at the headquarters of GitHub, a code hosting platform, in San Francisco, sitting inside a perfect, full-size replica of the office of the president of the United States.
A woman arrived to retrieve me. Shaking my hand, she explained that the Oval Office was being dismantled and replaced with a café for employees. We're trying to make things a little more practical, she said, with a shrug and a barely detectable roll of her eyes.
“But but but—” I sputtered silently in my head, eyes careening left and right. “It’s the Oval Office!” Who cares about practicality! It was like I’d been told they were razing Disney World to make room for more condominiums.
I got the job, and unbeknownst to me, stepped into a weird world that became one of my most formative experiences in tech, working at a company that pushed the boundaries of what corporate culture could be.
GitHub—which was acquired by Microsoft in 2018—announced this past February that, in addition to laying off 10 percent of its employees, it would permanently shutter all offices once their leases expired, including its beloved San Francisco headquarters. While this announcement may have looked like just another in a string of tech company office shutdowns, GitHub’s headquarters was notable both as a living testament to tech culture and as one of its first disputed territories, whose conflicts presaged the next decade of the tech backlash.
GitHub’s San Francisco office—spanning 55,000 square feet and christened with a ribbon-cutting ceremony attended by then mayor Ed Lee—caused a stir when it opened in the fall of 2013, even at a time when lavish startup offices were commonplace. The first floor was designed as an event space, complete with Hogwarts-style wooden banquet tables, a museum, a sweeping bar, and the Thinktocat, a giant bronze sculpture of GitHub’s mascot, the Octocat—a humanoid cat with octopus legs—in the pose of Rodin’s most famous work. Upstairs, there was a speakeasy, an indoor park, and a secret lounge, lined in wood and stocked with expensive whiskey, accessible through either a false bookshelf or the Situation Room, a conference room designed to look like the one in the White House.
Despite its opulence, the office was designed not to alienate but to make everyone feel like a “first-class citizen,” as early employee Tim Clem told InfoWorld at the time. GitHub cofounder Scott Chacon, who led the internal design process, explained to me that to lure local and remote employees in, instead of making mandatory in-office days, GitHub’s executives challenged themselves to design an office that was better than working from home. (It certainly worked on me. I generally prefer to work from home, but I came into the GitHub office almost every day.)
The Oval Office, for example, came about because Chacon and his colleagues realized that the lobby would be a place where visitors would be forced to sit and wait for five to 10 minutes, normally a boring or unpleasant experience. How could they create “the most interesting room” to wait in, which would help pass the time? As Chacon explains, “Most people don’t get a chance to sit in the Oval Office, but as an employee of GitHub, you could go there anytime you wanted.”
The office was a fun house that distorted the mind, not just with its flashy looks, but by playfully blurring the lines of hierarchy and power. Chacon’s comments reflect an organizational culture from GitHub’s early days, when there were no managers or titles. At the previous headquarters (“Office 2.0”), they flipped the rules of a private office that had belonged to the former tenant’s CEO, outfitting it with swanky leather chairs and declaring that anyone except executives could go in there. At Office 3.0, they connected the lighting and calendar systems, so that the lights would blink as the meeting approached its allotted time limit, then turn off completely—no matter who you were or how important your meeting was.
Twitter 'chose confrontation' on EU disinformation code
Twitter "chose confrontation" by exiting a voluntary EU disinformation code of practice that lays ground rules for an incoming European law on digital services, a European Union commissioner said Monday.
"We believe this is a mistake of Twitter. Twitter has chosen the hard way. They chose confrontation," commissioner Vera Jourova told journalists.
She said Twitter's compliance with the new Digital Services Act (DSA) entering force on August 25 "will be scrutinised... vigorously and urgently".
The European Commission announced May 27 that Twitter had decided to leave the code of practice, to which other major online platforms such as Google, Microsoft and TikTok continue to adhere.
The voluntary pact, which was launched in 2018 and strengthened last year with input from industry players, contains over three dozen pledges such as better cooperation with fact-checkers and not promoting actors that distribute disinformation.
It serves as a testing ground for the DSA, which will impose legal obligations on big platforms, with penalties of up to six percent of a company's global revenues in case of violation.
Since Elon Musk bought Twitter for $44 billion in October, it has cut more than 80 percent of the workforce and got rid of many moderators who vetted content for disinformation and harmful messages.
Jourova said "I can't predict" what conclusions the commission might make about Twitter's possible distribution of disinformation once the DSA comes into force.
But she said that signatories in the code of practice would have an "easier situation" because they would already have cleared the "burden of proof".
"There is an interplay between the code of practice, which is a voluntary agreement, and the Digital Services Act, which is enforceable," Jourova observed.
"I would like to give Twitter the chance to defend the right to make business in Europe without any sanction," she said.
What Is ‘Responsible AI’ And Why Is Big Tech Investing Billions In It?
The boom of artificial intelligence (AI) and super-intelligent computation has taken the world by storm. Pundits are calling the AI revolution a “generational event”—one that will change the world of technology, information exchange and connectivity forever.
Generative AI specifically has redefined the barometer for success and progress in the field, creating new opportunities across all sectors, ranging from medicine to manufacturing. The advent of generative AI in conjunction with deep learning models has made it possible to take raw data and prompts to generate text, images and other media. The technology is heavily based on self-supervised machine learning from data sets, meaning that these systems can grow their repertoire and become increasingly adaptable and appropriately responsive as they are fed more data.
Kevin Scott, Chief Technology Officer for Microsoft, writes about how AI will change the world, describing that generative AI will help unleash humanity’s creativity, provide new ways to “unlock faster iteration” and create new opportunities in productivity: “The applications are potentially endless, limited only by one’s ability to imagine scenarios in which productivity-assisting software could be applied to complex cognitive work, whether that be editing videos, writing scripts, designing new molecules for medicines, or creating manufacturing recipes from 3D models.”
[Image: President Donald Trump welcomes members of his American Technology Council, including Apple CEO Tim Cook, Microsoft CEO Satya Nadella, and Amazon CEO Jeff Bezos, at the White House, June 19, 2017. Photo by Chip Somodevilla/Getty Images]
[Image: President Joe Biden meets with his science and technology advisors at the White House to discuss artificial intelligence, April 4, 2023. Photo by Kevin Dietsch/Getty Images]
Both Microsoft and Google are at the forefront of this development and have made incredible strides in AI technology in the last year. Microsoft has integrated the technology seamlessly into its search functions, in addition to creating platforms for developers to innovate in other useful areas. Google has also progressed significantly on this front, showing immense promise with its Bard platform and PaLM API.
However, the promise of endless possibilities brings with it immense responsibility.
Namely, the advent of generative AI has also raised numerous concerns regarding the best way to develop these platforms in a fair, equitable, and safe manner.
One of the primary concerns is regarding the creation of systems that can provide equitable and appropriate results. A few years ago, Amazon had to disband an artificial intelligence system that the company was trialing to streamline the recruitment process. In an attempt to introduce automation into recruitment, the company built an AI system that could sort resumes from candidates and help identify top talent, based on historical hiring data. However, a significant issue emerged: because the system was using patterns based on historical data, and given that the tech industry has been historically dominated by males, the system was increasingly selecting males to advance in the recruitment process. Although Amazon recruiters only used this system for recommendations and made final decisions themselves, they scrapped the entire program so as to ensure complete transparency and fairness in the process moving forward.
This incident highlighted a hallmark issue for developers: AI systems are only as good as the data they are trained with.
Recognizing the potential for such problems, Google has been incredibly proactive in its approach to development. Earlier this month at Google’s annual developer conference, executives dedicated an entire portion of the keynote to “responsible AI,” reassuring the audience that it is a key priority for the company.
In fact, Google is striving to be transparent about its safety measures, explaining key issues in developing AI responsibly: “The development of AI has created new opportunities to improve the lives of people around the world, from business to healthcare to education. It has also raised new questions about the best way to build fairness, interpretability, privacy, and safety into these systems.” As a corollary to the conundrum Amazon faced, Google discusses the importance of data integrity and the inputs and models that are used to train AI systems: “ML models will reflect the data they are trained on, so analyze your raw data carefully to ensure you understand it. In cases where this is not possible, e.g., with sensitive raw data, understand your input data as much as possible while respecting privacy; for example by computing aggregate, anonymized summaries.” Additionally, the company emphasizes that users must understand the limitations of data models, repeatedly test systems, and closely monitor results for signs of bias or error.
Similarly, Microsoft has invested a significant amount of effort in upholding responsible AI standards: “We are putting our principles into practice by taking a people-centered approach to the research, development, and deployment of AI. To achieve this, we embrace diverse perspectives, continuous learning, and agile responsiveness as AI technology evolves.” Overall, the company states that its goal for AI technology is to create lasting and positive impact to address society’s greatest challenges, and to innovate in a way that is useful and safe.
Other companies innovating in this arena must be equally invested in developing these systems in a responsible manner. The development and commitment to “responsible AI” will undoubtedly cost tech companies billions of dollars a year, as they are forced to iterate and re-iterate to create systems that are equitable and reliable. Although this may seem like a high cost, it is certainly a necessary one. AI is both an incredibly new yet powerful technology— and it will inevitably upend many industries in the years to come. Therefore, the foundation for the technology must be strong. Companies must be able to create these systems in a way that fosters deep user trust and truly progresses society in a positive manner. Only then will the true potential of this technology be unlocked to become a boon rather than bane to society.
Getting started with MQTT in Azure Event Grid
MQTT is an important technology for the industrial internet of things (IIoT), building on concepts from IBM’s venerable MQ Series message queue technology. MQTT was initially designed to deliver telemetry from SCADA control systems, with IBM handing the protocol over to the OASIS standards body in 2013.
The standard is deliberately intended to evolve slowly, as it’s embedded in industrial device firmware, and used in hardware that may not get updates—ever. That’s because organizations typically deploy not just tens, or even mere hundreds of MQTT-enabled systems, but many thousands. Plus, MQTT devices are often deployed in inhospitable and hard-to-reach environments, like undersea pipelines, with rollouts often lasting years. MQTT is also relatively simple, with implementations for most common microcontrollers.
MQTT support in Azure Event Grid
Because MQTT is a publish-and-subscribe protocol, where endpoints publish messages that listeners subscribe to, it’s an obvious fit for Azure Event Grid, Microsoft’s pub-sub message handling service. Designed to scale to support massive device deployments, Event Grid is perhaps best thought of as a message routing broker, supporting IIoT and other event-driven applications, feeding events from devices to your applications and to Azure services. While Event Grid is perhaps best known for its implementation of the Cloud Events protocol, the service is able to support many different messaging standards.
Azure Event Grid’s protocol support now includes a public preview of MQTT, with support for MQTT 5 and MQTT 3.1.1 unveiled at Build 2023. MQTT support for both incoming and outgoing messages means Event Grid can serve as the hub of an IIoT control system. Events sourced from edge devices can be used to deliver new events to both sources and MQTT-ready applications, as well as to Azure’s own stream analytics tooling. Those events could also be stored in Azure Data Lake, where analysts can use tools like Data Explorer to extract insights from device data and use that data to train machine learning-powered control systems.
Azure Event Grid is an important component of any large IoT infrastructure, whether you are supporting consumer or enterprise devices. That’s partly because it’s an implementation of a many-to-one messaging pattern, allowing architectures to consume many thousands of inputs with Event Grid as message manager. Because Event Grid is a two-way architecture, applications can use it to broadcast alerts and information to selected clients. You can even use Event Grid as a relay so that a message from one client can be broadcast to all clients, or to a distinct subset. Microsoft has developed a reference architecture to show how Event Grid might be used in practice.
The result is a flexible way of connecting many devices in a hub-and-spoke network, where clients and services are linked by a scalable broker that manages authentication and authorization, reducing the work needed to build and secure services, and encapsulating functionality in defined namespaces. Namespaces are a useful tool for managing messages at scale, as they allow you to group clients and then collect their associated topics into topic spaces. This lets you apply permissions at a granular level so that clients need authorization before they can publish or subscribe to a topic.
Using MQTT in Azure Event Grid
Once MQTT messages are delivered to Azure Event Grid, they can be routed to Azure services using built-in APIs. Custom services and your own code can use webhooks to receive messages and then process them accordingly. Some MQTT message types and features aren’t yet supported by Event Grid. One missing feature, support for message ordering, might cause problems. If so, you will need to add your own code to ensure that messages are processed in the correct sequence.
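As a sketch of the receiving side (the framework and endpoint path here are arbitrary choices), a webhook subscriber must answer Event Grid's validation handshake before it begins receiving routed events, and this layer is also where any sequence handling would live:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/events", methods=["POST"])
def handle_events():
    for event in request.get_json():
        # Event Grid probes a new endpoint with a validation event and
        # expects the validation code echoed back before delivering data.
        if event.get("eventType") == "Microsoft.EventGrid.SubscriptionValidationEvent":
            return jsonify({"validationResponse": event["data"]["validationCode"]})

        # Routed messages land here; because ordering is not guaranteed,
        # re-sequencing logic (if needed) belongs in this handler.
        print(event.get("subject"), event.get("data"))
    return "", 200
```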
There’s a lot of scale in Azure Event Grid’s MQTT support. Each namespace handles up to 200,000 MQTT clients, delivering 20,000 messages a second. That’s only the preview release, too, as Microsoft has documented plans to rapidly increase this to 1 million clients and 100,000 messages a second.
Working with Event Grid is relatively simple. You can accomplish most of the tasks in the Azure Portal, though you can use the Azure CLI if you prefer (and if you want to build reusable scripts for future operations).
Building an MQTT broker
As your endpoint devices will be using MQTT to connect to Azure resources, start by opening port 8883 in your firewall, both from your network and into the Azure VNet used for your application. This is the standard port for MQTT over TLS and should allow any compliant device to connect to your Event Grid. It’s a good idea to use an X.509 certificate to authenticate client connections. You can generate X.509 certificates using a tool like the open-source Step certificate authority on most platforms, including Windows, Mac, and Linux.
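If you just want to experiment without standing up a full certificate authority, a self-signed client certificate, plus the SHA-256 thumbprint used to register the client, can be produced with the Python cryptography package. This is a sketch for testing only, not a production PKI:

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "client1")])
now = datetime.datetime.utcnow()

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: subject and issuer are the same
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)

with open("client1.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
with open("client1.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.NoEncryption(),
    ))

# The thumbprint to register against the client in Event Grid.
print(cert.fingerprint(hashes.SHA256()).hex())
```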
In the Azure Portal start by creating an Azure Event Grid Namespace in the resource group you’re using for your MQTT application. Namespaces are DNS entries, so they need to be unique to an Azure region. It’s a good idea to use names that are related to the purpose of the application you’re building, so use the name to tag it as one that supports MQTT. Finally, choose a region for the namespace before creating it.
Once the namespace has been created, enable MQTT support from its configuration page in the portal. You can now start to add clients to your Event Grid. For simple test applications with a handful of clients, you can add them using the UI, but for larger deployments consider automating the process with a script and using generated names for clients. You should tag each client with the thumbprint from your X.509 certificate, as this will be used for authentication.
You can now start to add topic spaces to your Event Grid, along with the filters used to select messages that have been published into its topics. With topic spaces in place, you can add permissions for clients, giving them publisher access to the topic space.
Configuring MQTT clients
Clients will need to be configured with the certificate you’ve created and will then use the topic names you’ve added to Azure Event Grid to publish messages. You’re now ready to add subscriptions to these topics, for example adding a connection to an Event Hub in Event Grid to automatically translate MQTT content into other formats, such as Cloud Events. This is perhaps one of the more useful aspects of Azure Event Grid’s MQTT support, as it moves messages and events out of the realm of operational technology into more flexible protocols.
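A minimal publish sketch using the paho-mqtt package (1.x API); the namespace host name and topic are placeholders, and the certificate files are the ones generated earlier:

```python
import paho.mqtt.client as mqtt

# The client ID and username are assumed to match the authentication
# name of the client registered in the Event Grid namespace.
client = mqtt.Client(client_id="client1", protocol=mqtt.MQTTv5)
client.username_pw_set(username="client1")
client.tls_set(certfile="client1.pem", keyfile="client1.key")

client.connect("<namespace-hostname>", port=8883)
client.loop_start()

info = client.publish("factory/line1/telemetry", payload='{"temp": 21.5}', qos=1)
info.wait_for_publish()

client.loop_stop()
client.disconnect()
```

Subscribing works the same way, provided the client has been granted subscriber permissions on the relevant topic space.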
Of course, if you’re working with existing MQTT systems, you can configure your Event Grid with the fingerprint of existing certificates. This will allow you to upgrade without major updates to device firmware.
As Event Grid is a publish-and-subscribe service in its own right, routing to alternative protocols requires mapping your MQTT topics to Event Grid topics. As part of this process you’re able to add new properties to messages, which can be used to enhance Event Grid-sourced Cloud Events messages received by tools like Azure Event Hubs, for example providing content metadata that isn’t present in a basic MQTT message from a remote device that can then be used by Stream Analytics or as additional labels for a machine learning model.
Adding MQTT support to Azure Event Grid is a sensible move by Microsoft’s Azure IoT team. With long-lived industrial devices an essential component of any IoT platform, MQTT support will allow companies to quickly migrate existing device deployments to a cloud native environment. And by making use of AI support for device monitoring, they can spot outlying readings and use them to drive control systems and to order predictive maintenance. The result could well be a significant upgrade for any large-scale messaging environment.
Human Extinction From AI is Possible, Developers Warn
“AGI fear-mongering is overhyped, toxic, likely to lead to regulatory capture by incumbents, and can slow down or hinder the positive applications of AI across society including biological science and ...
Why Rubrik Is Looking to Break Cybersecurity's IPO Dry Spell
Data Protection Titan Could Raise More than $750M Through 2024 IPO, Reuters Reports
Michael Novinson • June 5, 2023
[Image: Bipul Sinha, co-founder and CEO, Rubrik]
The deep freeze in cybersecurity initial public offerings could at last be thawing.
2021 was a banner year for IPOs in the market, with KnowBe4, Darktrace, SentinelOne and ForgeRock all taking advantage of pandemic-driven demand for security technologies to go public. But a reversal of economic fortunes over the past year has done a number on these companies, with KnowBe4 getting bought by Vista Equity, ForgeRock inking a deal with Thoma Bravo, and Darktrace and SentinelOne trading below their IPO prices.
Companies that eschewed the initial public offering in favor of merging with or being acquired by a shell company that was already public haven't fared any better. Risk analytics platform Qomplx called off its SPAC merger, Appgate and IronNet have conducted steep layoffs and changed CEOs since going public and ZeroFox and Hub Security have seen dramatic stock price declines since going public via a SPAC.
Despite the beating new publicly traded security companies have taken during the economic downturn, one high-flying data protection vendor is looking to test its luck in the public market. Reuters said Monday that Silicon Valley-based Rubrik is working with Goldman Sachs, Barclays and Citigroup in preparation for an initial public offering that could take place in 2024 if the market becomes more welcoming.
"We are going after observing the core data to understand the security threat." – Bipul Sinha, co-founder and CEO, Rubrik
Rubrik currently generates annual recurring revenue of about $600 million and may raise more than $750 million in its IPO, sources told Reuters. The company in 2021 got an investment from Microsoft in the low tens of millions that valued Rubrik at $4 billion, Bloomberg reported. Citigroup declined to comment, while Rubrik, Goldman Sachs and Barclays didn't respond to requests for comment.
What Makes Rubrik A Compelling IPO Candidate?
The firm has raised more than $550 million since its founding in 2014, including a $261 million Series E funding round at a $3.3 billion valuation that helped Rubrik move into security and compliance. Despite the economic headwinds, the company has increased its headcount by 19% over the past year to 3,334 employees, with the most aggressive growth coming in its sales and operations organizations.
"We are going after observing the core data to understand the security threat," Rubrik co-founder and CEO Bipul Sinha told Information Security Media Group in September. "As a result, our customers are not only doing the initial purchase, but they are also expanding with us rapidly."
Rubrik is well regarded by analyst firms, with Forrester in December recognizing it as a leader in data resilience alongside Commvault and Cohesity. Forrester praised Rubrik for integrating signals found in the backup process with leading SIEM and SOAR tools, but chided Rubrik for forcing customers to work with its customer success function on a regular basis to qualify for the ransomware recovery warranty (see: Commvault, Rubrik, Cohesity Lead Data Resilience: Forrester).
Similarly, Gartner in August called Rubrik a leader in enterprise backup and recovery software alongside Veeam, Commvault, Veritas, Dell and Cohesity. Gartner praised Rubrik for large enterprise adoption, ransomware protection and recovery features and ease of deployment and use, but cautioned about limited SaaS backup, narrow NAS Cloud Direct integration and ending its evergreen hardware program.
“Customers come to Rubrik when they have a security focus,” Rubrik VP and Head of Products Vasu Murthy told ISMG in December. “If they’re afraid of ransomware and they want to improve the security of their systems, Rubrik is their No. 1 choice.”
Artificial Intelligence and More Acquisitions on the Horizon?
In recent months, Rubrik has looked for ways to apply artificial intelligence within its own organization given the challenges humans face when attempting to understand, correlate, find the cause of, analyze and fix security incidents. The company counts Allstate, KeyBank, Honda, the Denver Broncos, Nvidia, Adobe, Sephora, The Home Depot, Harvard and New York University among its 5,000 customers.
"AI and ML can also be used for good to understand the intent of a particular event, if the event correlated with a broader set of activities, if it could potentially be a zero-day or an unpatched vulnerability, and where humans can intervene to solve the problem," Sinha told ISMG in April.
One potential area of expansion for Rubrik is public cloud data observability, where Calcalist said the company and data protection rival Datadog are weighing a purchase of Laminar for between $200 million and $250 million. Rubrik hasn't been shy about doing acquisitions to broaden its technological footprint, making buys in the unstructured data management and infrastructure automation spaces (see: Why Datadog and Rubrik Are In Talks to Buy Laminar for $200M).
Come this time next year, Rubrik might have more dry powder to pursue acquisitions as a newly minted public company.
EU Digital Experts To Put Twitter Under 'Stress Test' This Month
KEY POINTS
EU's digital chief Thierry Breton said up to 10 digital experts will conduct the stress test
A top EU official recently slammed Twitter for choosing 'a hard way' to comply with EU rules
The French digital minister has threatened to ban Twitter if it refuses to follow the bloc's rules
Twitter will be subjected to a "stress test" by European Union digital specialists this month, Thierry Breton, the European Commissioner for the internal market, said Thursday.
Breton, who has repeatedly called on the social media platform to adhere to the bloc's tech regulations, said in an interview that a team of about five to 10 digital specialists from the EU will put Twitter and possibly other tech companies under "stress tests" late this month, The Wall Street Journal reported.
The French business executive clarified that the stress test is voluntary and does not have enforcement or monetary consequences, but it will give Twitter an idea of how the bloc's Digital Services Act (DSA) will be enforced.
Breton's comments come days after he revealed that Twitter has left the EU's voluntary Code of Practice against disinformation. The EU's digital chief warned that even if the social media platform pulled out of the disinformation code, "obligations remain."
"You can run but you can't hide," he said.
Vĕra Jourová, vice president of the European Commission, said "bye, bye" to Twitter, adding that the platform "has chosen a hard way to comply with our digital laws."
"Russia's disinformation is dangerous and it is irresponsible to leave EU's anti-disinformation Code," she said.
Jourová went on to reveal that the Code "remains strong" and this month, she will meet with signatories "so we can step up our actions" ahead of the elections.
As of June 1, Twitter was still listed as a signatory of the EU disinformation code alongside other prominent American tech companies such as Meta, Microsoft, Google and Twitch.
Earlier this week, France's digital minister Jean-Noël Barrot said Twitter may be banned in the EU if it refuses to follow the bloc's digital platform rules.
"Disinformation is one of the gravest threats weighing on our democracies. Twitter, if it repeatedly doesn't follow our rules, will be banned from the EU," Barrot said as per a translation by Politico.
Breton and Musk have held two video calls since the Tesla CEO took over the social media platform in October last year.
During the January video meeting, Breton warned that "the next few months will be crucial to transform commitments into reality," Reuters reported. He said the EU wants to see "progress towards full compliance with the DSA."
Twitter's withdrawal from the EU's disinformation code contradicts Musk's earlier comments about the DSA being "exactly aligned" with his thinking, made in a meeting with Breton weeks after his plan to purchase Twitter surfaced.
"It's been a great discussion ... I agree with everything you said, really," Musk told Breton in a video that the EU official shared on Twitter.
The tech billionaire also replied to the video saying it was a "great meeting" with Breton. "We are very much on the same page," he said.
Twitter's relationship with the EU has been on the rocks in recent months as officials raised concerns about the company's content moderation, disinformation and journalist bans.
Barrot and Jourová previously called out the social media platform for the sudden ban of some journalists in mid-December. The banned journalists were from CNN, The Washington Post and The New York Times, as per Politico.
After the ban on journalists, French industry minister Roland Lescure said he was temporarily leaving the platform to protest Twitter's move.
Some German officials also criticized the move, with the German Foreign Affairs Ministry saying press freedom should not "be switched on and off arbitrarily."
In March, Twitter insiders told BBC that the platform could no longer protect users from disinformation, hate and child sexual exploitation after mass layoffs and changes at the company since the Musk takeover.
One employee told the outlet that harassment campaigns that targeted freedom of expression were going "undetected" on the platform.
An employee only identified as Sam said the chaos within Twitter was driven by the massive disruption in the workforce as many were laid off and others left the company after Musk took over.
An April study found that hate speech increased across the platform since October 2022 and the daily use of hate speech by accounts posting hateful content nearly doubled after the tech billionaire's takeover.
In an April interview with BBC's James Clayton, Musk denied that hate speech had increased across the platform.
The bird will get a preview of how the EU looks to enforce its strict digital laws this month, as the platform appears to have defied the bloc by withdrawing from the EU disinformation code.
Hands on: Apple Vision Pro: I just wore the future
Perhaps it was the moment a virtual butterfly effortlessly landed on my extended finger, or maybe it was the dinosaur's snaggle-toothed maw that came within inches of my face, or even the mountaineer who balanced barefoot on a thin cable pulled taut across a vast ravine. In truth, it was all of those experiences with Apple's stunning Apple Vision Pro spatial computing headset that convinced me I'd just experienced the true future of VR.
I know what you're thinking, "Dude, talk to me when you've washed the Apple Park Kool-Aid out of your system." That's fair. I'm just hours from the moment Apple unveiled its first VR/AR wearable on the WWDC 2023 stage. It was a good presentation, but it's hard to convey the power or experience of using VR through a canned keynote; 2D video is not equal to the task.
Think about all those Meta Quest launches with CEO Mark Zuckerberg and his often half-bodied friends wandering around the Metaverse. I mean, it sounds cool but... OK, even the experience of the Metaverse isn't equal to the idea of the Metaverse. Still, to give Meta and other VR purveyors their due, you don't know how good the HTC Vive Pro or Meta Quest Pro are until you try them.
I've tried them all and they all have their moments. They also have their limits. The visuals don't always hold together. Outside-in features like passthrough video are grainy or the representation of your hands and fingers is too cartoony. The headsets are usually too heavy or uncomfortable to wear for more than 15 minutes at a time.
Materials and design
Spatial audio is delivered through the speakers on either side of your head. (Image credit: Future / Lance Ulanoff)
With Apple Vision Pro, a project that was literally years in the making (and it shows), Apple didn't approach these problems by trying to find a midpoint accommodation. Instead, it clearly went all out, sparing no expense on components, design, and materials. The result is a mixed-reality headset that is at once familiar and wholly distinct.
It still looks like an expensive pair of ski goggles, but the face is actually an extraordinary piece of glass. In immersive mode and with a kaleidoscope of rainbow colors, the face of it looks like Siri's cousin (yes, Siri is integrated; no, I didn't get to try it). In passthrough mode, it can show your eyes, or rather a video of your eyes, since there's clearly no way to see all the way through the device. Apple calls this feature "EyeSight." It reminds me of what someone looks like when wearing those old joke X-ray glasses.
The rest is a fabric-covered body and brushed aluminum frame (that joins perfectly with the glass face) and a wide mesh strap on the back to secure it to your head.
Inside are not one but two Apple silicon chips, the M2 and the new R1 (the M2 handles processing, while the R1 makes sure the spatial experience is top notch).
You obviously could glean much of this from the presentation and various news reports about the new Vision Pro. I want to tell you about using it.
Apple Vision Pro pricing and availability
Apple announced its Vision Pro headset on June 5, 2023, at WWDC 2023. It will cost $3,499 in the US and ships sometime next year (2024). Availability and pricing in other markets are yet to be confirmed, but Apple says those will follow in 2025.
Apple Vision Pro setup
Detail of Apple Vision Pro's Digital Crown (left) and mesh band. (Image credit: Apple)
Unlike a typical product hands-on where you can do whatever you want with the latest technology, Apple's roughly 30-minute Vision Pro demo was a guided experience. That's not to say I didn't use it. I did, but Apple was quite prescriptive in what I should do. In the end, though, I thought they helped me experience the best of it and maybe gave me my best AR/VR experience ever.
Before I could don the roughly 1lb (453g) headset, though, Apple asked me to get my eyes checked or, rather, my glasses. I walked into a small room where a polite gentleman asked to see my glasses and quizzed me about my eyesight. Was there anything unusual about it beyond my progressive lens glasses? I told him there wasn't, and he proceeded to stick my eyewear into a system that looked like it belonged in an optometrist's office.
Apple would use this information to select the right set of Zeiss lens inserts for the Vision Pro. Apple told me they had just a fraction of the lens options that would eventually be available to potential Vision Pro customers.
Also in preparation for my first experience, Apple had me use an iPhone to scan my face (in a fashion similar to Face ID registration) and my ears for an accurate spatial audio experience.
With all this done, I waited another 20 minutes before I was led into a room where a pair of Apple representatives would guide me through wearing and using Apple Vision Pro.
Also in the room was a lone Vision Pro. I noticed right away the cable snaking under it to a small but dense battery pack (roughly two hours of battery life) that looked a little like the back of the original iPhone. I also noticed the extra band that ran from one side of the headset to the other that would, it turned out, offer a crucial bit of support when I wore it.
I was not allowed to take photos of myself wearing the Vision Pro headset or capture any of the images I saw (though I promise they would not do it justice.)
Let's get it on
Lifting the headset, I noticed that it seemed smaller and lighter than competing VR headgear from Meta and even HTC. Keeping the battery outside of the body is a less-than-Apple-like move, but I think it's ultimately the right one. A gram or two more in the body and maybe Vision Pro isn't so comfortable to wear.
I was instructed to grab the Vision Pro by the area that would sit on the bridge of my nose and by the wide mesh on the back. I placed it on my face and it started to slide down my nose. There's a wide ring near the back that I then used to tighten Vision Pro on my face. I was cautioned not to make it tight. I twisted the dial until the headset felt firm and, more importantly, balanced on my nose and forehead. Then I tightened the strap that ran over the crown of my head. That was what did the trick. Now the Vision Pro felt snug and comfortable.
This is where things got interesting.
The system launched with the classic Mac "hello" drawn out in 3D script in front of me. It's a nice touch that neatly ties this new hardware category to Apple's iconic product history.
Apple Vision Pro is mostly controlled via gesture and vision tracking but, in order for that to work, I had to go through a brief setup routine. For the eye tracking, I was instructed to keep my head still and then look at a series of white dots that floated one at a time before me. I did this twice. Next, I held up my hands in front of my face. The system's multitude of cameras registered them in seconds. As it did so, I noticed a faint, shimmering glow around them.
I can see clearly now
Apple Vision Pro will show someone a video of your eyes, which is not weird at all. (Image credit: Apple)
Did I mention that the Vision Pro has excellent passthrough vision? Instead of a grainy or limited view of the outside world, it looked almost as if I was peering through clear glass. Throughout my demo, this view would purposely phase in and out of visual existence. Apple intentionally does what it can to keep the real world and people around you visible, unless you are in fully immersive mode.
There are a pair of buttons on the Vision Pro. The one on the upper left is for capturing spatial (3D) photos and videos. I never used that. On the other side is the Digital Crown. Yes, it's just like the Digital Crown on an Apple Watch but larger. When I heard Apple included this on the Vision Pro, I was skeptical. Why would Apple bring a tiny Apple Watch part to its VR headset? Turns out, it's a near-perfect way to augment gesture controls.
I pressed the Digital Crown to reveal the headset's main interface (you can also press and hold it to recenter the main interface). As I scanned the small set of horizontally aligned apps that looked like a cross between iPad and iOS icons, I noticed that each one kept almost pulsing forward toward me. It took me a beat to realize that they were each reacting to my gaze.
One of the Apple reps explained how I would control Vision Pro through gaze, combined with pinches and horizontal and vertical pulls. Basically, if I looked at an app like Photos, I could then pinch together my thumb and index finger to open it. To scroll in a window, I would pinch, hold and drag my hand left or right or up or down. There was never a need to carefully hold my hand in front of my face. Usually, I was resting my right hand in my lap when I made a pinching gesture.
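For readers who think in code, the control scheme reduces to a small state machine in which gaze selects a target, a pinch commits, and a pinch-plus-drag scrolls. Here is a minimal Python sketch of that logic; every event and field name below is my own hypothetical invention, not anything from Apple's SDK (which wasn't public at the time of this demo).

```python
# Hypothetical sketch of the gaze + pinch interaction model described above.
# None of these names come from Apple's SDK; they only make the logic
# concrete: gaze chooses the target, pinch commits, drag scrolls.
from dataclasses import dataclass
from typing import Optional

@dataclass
class InputFrame:
    gaze_target: Optional[str]  # app/window the eye tracker says you're looking at
    pinching: bool              # thumb and index finger touching
    drag_delta: float           # hand movement while pinched (for scrolling)

def handle(frame: InputFrame, was_pinching: bool) -> str:
    if frame.gaze_target is None:
        return "idle"
    if frame.pinching and not was_pinching:
        return f"open {frame.gaze_target}"      # a fresh pinch acts like a tap
    if frame.pinching and frame.drag_delta != 0:
        return f"scroll {frame.gaze_target} by {frame.drag_delta}"
    return f"highlight {frame.gaze_target}"     # gaze alone just highlights

# Looking at Photos and pinching opens it; the hand can rest in your lap.
print(handle(InputFrame("Photos", pinching=True, drag_delta=0.0), was_pinching=False))
```

The key design point the demo made clear is the last line of the sketch: because gaze does the targeting, the pinch never has to happen in front of your face.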
Apple Vision Pro battery pack. (Image credit: Future / Lance Ulanoff)
I eventually used these gestures to open and close windows, select photos, and scroll through open Safari pages. With little guidance, I quickly moved various app windows (Messages – I sent a text message without any guidance, Photos, Safari) all around the room to create the biggest desktop I'd ever seen.
I remember years ago my first experience using Microsoft Hololens. I liked it. It was a big leap forward in the world of mixed reality, but there was also no getting past its limits like a narrow field of view and sometimes ineffective gesture controls.
Apple Vision Pro's field of view is, by contrast, as big as the space around you. Interface screens and massive windows appeared to fill the room.
In the Photos app, I looked at 12ft photos and panoramas that wrapped almost fully around me. Apple, I thought, had finally found a use for all those panoramic photos we've been taking for almost a decade.
One of the odder experiences was a FaceTime call with an Apple spokesperson. The call alert appeared way at the top of my virtual screen and almost out of view. I looked up at it and pinched to open the call, where I was greeted by an eerie-looking "Persona" avatar. This is the 3D scan of a face that you can capture with the Vision Pro and then use in your own Vision Pro-based FaceTime calls. As the Apple rep explained, the avatar reproduced where she looked and her expressions, and lip-synced its animated mouth to her voice. It looked not quite human and was my least favorite part of the experience.
On the bright side, she showed me how we could collaborate through FaceTime on a Freeform project board. It was an impressive bit of collaboration in 3D space.
Apple Vision Pro Personas look just as odd as you thought they would. (Image credit: Future)
Imagery in general is fantastic thanks to the dual micro-LED displays that provide, according to Apple, 23 million pixels of imagery. What I noticed is that no matter where I looked or even if I glanced at the edge of the images, they never distorted or faded. I did see some light leaks, especially in dark images but never when I was fully immersed in some experience. Apple said they expect to have more custom light seal options when Vision Pro ships.
All the photography, much of which was taken with an iPhone, looked great, but we slipped just a tad into the uncanny valley with Spatial Photos and videos. I'm not saying these images and videos of a child blowing out her candles (with smoke virtually hitting my face) and friends gathered around a fire pit didn't look real. They looked more than real. It was like a postcard from Minority Report, but instead of pre-cogs we have a past-cog Vision Pro letting us relive moments like never before.
Things only got wilder from there.
Apple showed me (or guided me to open) Environments, which are like 360º photos or backgrounds. I could use the Digital Crown to dial up or back the level of immersion. It was at this moment that I happened to look down at my hands. They were resting on my knees which had, well, disappeared. Instead, my real hands and forearms were resting on the immersive background. It was a startling effect.
Entertain me
Vision Pro will clearly have many uses. It has the potential to be the ultimate productivity environment, allowing you to use a physical keyboard and mouse with a massive virtual desktop (I did not get to try any physical devices with it), and it has obvious entertainment potential.
Apple directed me to launch a 30-second 3D clip of Avatar 2, which loaded up on a large screen that floated in our meeting space. The movie looked beautiful, vibrant, and enchanting. The 3D effect was quite good. Then we selected a Cinema Environment and the room faded to black and it looked as if Avatar 2 was playing on a giant movie house screen. This significantly enhanced the overall 3D effect.
There is, however, a big difference between watching a movie and, essentially, being inside one.
Immersive Video is Apple's own spin on immersive VR experiences. A series of short clips put me feet away from Alicia Keys singing a set in a small studio, in flight over mountains and skyscrapers, and apparently standing on a thin cable over a ravine. On that last bit, I wasn't alone. Right in front of me, arms outstretched, stood a barefoot woman carefully balanced on the cable. It was thrilling and terrifying.
This is different
Believe me, this is much scarier in person. (Image credit: Future / Lance Ulanoff)
My final Vision Pro experience, though, was unforgettable. I opened an app called Encounter Dinosaurs and then watched as the wall in front of me parted to reveal a barren and ancient landscape. I spotted a small butterfly on one of the rock outcroppings. The Apple rep told me to hold out my finger. I did as I was told and the butterfly alighted, fluttered, and then flew to me, carefully landing on my outstretched digit. I gasped.
The butterfly flew off and then a small dinosaur crawled out of a crevice. The spatial audio, which had been so good and so subtly on point throughout my demo experience, made it obvious that something large was approaching from the left, just behind the office wall. A large reddish-brown dinosaur thunderously emerged. It glanced about and then looked directly at me. Soon it was walking toward me and I reflexively pressed my back into the couch. The Apple reps encouraged me to get up and approach the dinosaur. Of course, I had to pick up the small battery pack first.
I stood and, since I could still see the whole room around me, walked around the coffee table. The dinosaur watched me, warily tracking my movement. Soon my face was just inches from its toothy mouth and snout. It was amazing, a dinosaur lover's dream. Then the dinosaur retreated to the rocky outcropping, and the wall closed as it let out one last roar.
Apple Vision Pro with battery pack. (Image credit: Apple)
Early verdict
Apple has built the first lust-worthy VR headset. It's beautiful to look at and gets most of the key VR and AR experiences right. Even now, months from release, it's already the most intuitive VR interface not yet on the market. The eye and hand tracking are already excellent. I think spatial photos and, especially, spatial video may change how we engage with memories.
There are some issues and hurdles.
The EyeSight display that can show you someone's virtual eyes is probably a mistake, as may be the weird 3D Persona avatars. It's also, at $3,499, wildly expensive, and could be more so if you need to buy Zeiss lens inserts (maybe one set for each family member). Apple may struggle to justify that expense unless it can get a whole lot more people to experience it. It's quite possible that by this time next year, there will be dozens of Apple Vision Pro headsets available at Apple Stores around the world for hands-on experiences.
That may change a lot of minds and turn the Apple Vision Pro into the ultimate wishlist VR product. Can it catch up to the popular Meta Quest or HTC Vive? That all depends on how willing people are to part with thousands of dollars for what is clearly an extraordinary, premium-level VR experience.
AI in retail: Smarter stores, smarter product design
This article is part of a VB Lab Insights series on AI sponsored by Microsoft and Nvidia. Don’t miss additional articles providing new insights, trends and analysis on how AI is transforming organizations and industries. Find them all here.
Retail is stocking up on AI. Analysts say the sector’s spending in 2023 will outpace all others except banking. A 40% adoption rate is projected to double by 2025, making retail the industry most heavily invested in intelligent technology.
Companies are turning to AI to handle a long list of challenges that would keep any business leader awake at night: brick-and-mortar revenues hit by changing consumer behavior, worsening staff shortages and rising labor costs, supply chain disruptions, heavy pressure on profits and costs (including inflation and double-digit increases in customer acquisition costs), and massive losses from theft and organized crime.
Beyond these pressing issues, retailers expect that AI-driven analytics and applications can help them navigate major long-term changes. Chief among them are shifts in buyer demographics (more diverse, digitally savvy, older), consumer values (price and convenience over brand loyalty), sales channels (rapid growth of ecommerce, mobile and social) and growing demands for global sustainability.
Against this backdrop, here’s a look at two areas of high-value AI innovation in retail.
Intelligent stores: Fighting loss, optimizing business
Despite more than three decades of online growth, physical locations remain important anchors for many retailers. While the two worlds continue to meld, business and technology leaders are focusing on ways to evolve, differentiate and optimize the customer experience (CX) and business performance of stores. For many, loss prevention, also known as asset protection, tops the priority list.
According to the National Retail Federation (NRF), retailers worldwide forfeit more than $100 billion each year to “shrinkage”, the industry term for inventory theft, loss and waste. More than half occurs in North America. Average shrink rates exceed 1.5% of revenues, so for a $20-billion grocer, that is a hefty $300-million yearly hit. COVID and inflation further aggravated the problem: in a recent survey, 89.3% of respondents reported increases in violence, 73.2% reported more shoplifting, and 71.4% each reported more employee theft and organized retail crime.
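The grocer example is simple arithmetic; a quick back-of-the-envelope sketch using only the figures quoted above:

```python
# Back-of-the-envelope shrinkage cost, using the NRF figures cited above.
revenue = 20_000_000_000   # $20B annual revenue for the example grocer
shrink_rate = 0.015        # 1.5% average shrink rate

annual_shrink_cost = revenue * shrink_rate
print(f"Annual shrinkage: ${annual_shrink_cost:,.0f}")  # $300,000,000
```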
To fight back, more retailers are adopting intelligent video analytics (IVA) technologies that can accurately and efficiently reduce shrinkage. New AI-driven systems help prevent losses in real time, improve asset protection at points of sale, reduce shoplifting storewide, and ensure the safety of employees and customers, who bear the cost in higher prices.
From warehouse to checkout, Everseen sees everything
Everseen, an international software company based in Ireland, has developed computer vision AI systems that help retailers see and solve shrinkage problems in real time. Alan O’Herlihy, CEO and founder, says using this AI at the edge effectively transforms an entire physical retail space into actionable data that can drive better decisions. Here’s how it works:
Running on the NVIDIA AI platform on Microsoft Azure, the solution modules integrate with a retailer’s existing cameras, point-of-sale, computers and other business systems. Doing so provides an end-to-end view across their entire supply chain — from warehouse to store to shelf to checkout — that can pinpoint gaps in inventory and other problems requiring immediate attention. The AI then recommends a “next best” action — all in real time.
For example, Evercheck, the company’s point of sale (POS) solution, instantly detects and corrects both deliberate and unintentional errors at staffed and self-checkout lanes. For the latter, an instantaneous “nudge” (delivered within 300 milliseconds) prompts shoppers to correct mistakes such as an unscanned item, cutting the share of transactions needing staff intervention and potential conflict from 20% to 2%. Another AI-powered product, Everdoor, reduces loss and improves process compliance in the stock room.
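To make the pattern concrete, here is a minimal sketch of how such a real-time checkout "nudge" loop might be structured. Everseen has not published its implementation, so the event shape, the mismatch rule, and the latency-budget handling below are illustrative assumptions, not its actual code.

```python
# Illustrative sketch of a real-time checkout "nudge" loop (assumptions only).
import time
from dataclasses import dataclass
from typing import Optional

NUDGE_BUDGET_MS = 300  # the article cites a 300 ms window for shopper prompts

@dataclass
class LaneEvent:
    item_seen: Optional[str]     # item a (hypothetical) vision model saw cross the scan zone
    item_scanned: Optional[str]  # item the POS actually registered

def detect_mismatch(event: LaneEvent) -> Optional[str]:
    """Hypothetical rule: an item crossed the scan zone but never hit the POS log."""
    if event.item_seen and event.item_seen != event.item_scanned:
        return f"Please re-scan: {event.item_seen}"
    return None

def process(event: LaneEvent) -> None:
    start = time.monotonic()
    nudge = detect_mismatch(event)
    elapsed_ms = (time.monotonic() - start) * 1000
    if nudge and elapsed_ms <= NUDGE_BUDGET_MS:
        print(f"[lane display] {nudge} ({elapsed_ms:.1f} ms)")

process(LaneEvent(item_seen="cereal", item_scanned=None))
```

In production the mismatch rule would be a computer vision model comparing scan-zone detections against the POS log, but the shape of the loop stays the same: detect, decide, and prompt within a tight latency budget so the shopper can fix the mistake themselves.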
Analyzing 275 years of video data daily
All told, Everseen each day analyzes in real time a staggering 275 years of diverse and labeled video data. The company monitors unstructured data from 22 million customer interactions with 220 million products. O’Herlihy says the insights and actions derived are invaluable in helping retailers reinvent related business processes. That, in turn, can yield a host of benefits, he says, including increased revenues and sales throughput, reduced costs, mitigated risk, better customer experiences and optimized operations in distribution centers.
According to Everseen, more than half of the world’s top 15 retailers have adopted the company’s AI-based loss prevention systems, a total of 6,000 stores and 80,000 checkout lanes. Says O’Herlihy: “The goal of our AI is to reduce the friction for ‘green actors’ and increase friction for ‘red actors’ by dynamically delivering intuitive fixes and split-second decision-making.” More on Everseen’s seamless shopping.
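Those headline numbers are easier to grasp with a rough scale check. The per-lane figure below is my own division, assuming 365-day years and that most footage comes from the 80,000 monitored checkout lanes:

```python
# Rough scale check on the figures quoted above (assumptions: 365-day years,
# most footage from the 80,000 monitored checkout lanes).
years_of_video_per_day = 275
hours_per_year = 365 * 24
lanes = 80_000

hours_per_day = years_of_video_per_day * hours_per_year
print(f"{hours_per_day:,} hours of video analyzed per day")    # 2,409,000
print(f"~{hours_per_day / lanes:.0f} hours per lane per day")  # ~30
# More than 24 hours per lane per day implies multiple camera angles per
# lane plus stockroom and warehouse feeds, consistent with the article.
```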
Other emerging uses of AI in intelligent stores:
Optimizing layout and experience. Leading firms are exploring how digital twins and simulation can create smoother experiences for customers and employees. Lowe’s uses AI-driven simulation with NVIDIA Omniverse to enhance store layout, optimize merchandising and improve employee productivity. The same technology helps Kroger design the best customer shopping experience, including fast and easy checkout.
In-store ads and promotions. Intelligent targeting delivers live shopping suggestions that can expand cart size through opportunities for upselling and cross-selling. Dynamic digital signage, such as that delivered by Cooler Screens, automatically updates to offer promotions tailored for every shopper and creates additional revenue opportunities tied to dynamic in-store displays. See more here.
Sensors capture real-time consumer data and analytics from digital cooler screens, which helps retailers better understand buying patterns while preserving individual privacy. This picture shows a heat map of where shoppers focused their attention. Credit: Cooler Screens
Autonomous shopping. Smart “grab-and-go” stores, where customers use their mobile phones to check out, are fast gaining popularity. New approaches include AI-enabled shopping carts, nano stores, smart cabinets and fully autonomous stores. Solution providers, including AiFi and AWM, seek to give customers a more “frictionless” and faster shopping experience that boosts retailers’ revenues and margins.
Generative AI: Pinpoint design for real and digital fashion
Generative AI like DALL·E and ChatGPT can be used to create new product designs based on customer feedback, sales data, market trends and other signals. Taking advantage of these tools can help retailers develop new products that are more appealing to buyers and better aligned with market demand.
Startup Fashable is pioneering the use of generative AI to create sustainable fashion designs without the need for fabric.
Unsustainable manufacturing, unsold inventories and long production cycles are common (and costly) problems in fashion. While a high-end designer might take months or years to design and produce a collection, “fast fashion” brands do so with a fraction of the time and cost, thanks to inexpensive materials and labor. But what happens when clothing production goes up while its lifecycle goes down? A growing landfill problem. In the U.S. alone, 21.6 billion pounds of textile waste get trashed every year.
So, in 2021 the Portugal-based startup, led by co-founders experienced in software engineering and AI research and development, envisioned a disruptive AI application. It would create dozens of original and realistic clothing designs and fashion content in minutes without using any material. The pair believed that a smart, all-digital approach would help fashion companies better meet customer demand, get to market faster and reduce fabric and clothing waste, explains co-founder Orlando Ribas Fernandes.
Entire digital collections with a few clicks
The Fashable team created its AI algorithm on Azure AI Infrastructure powered by NVIDIA A100 Tensor Core GPUs, Azure Machine Learning (an enterprise-grade service for the end-to-end machine learning lifecycle) and PyTorch, an open-source machine learning framework. The system lets designers quickly generate endless digital options for fashion in the metaverse or the real world. More technical detail here.
Fashable AI is composed of different neural networks. These ingest data from multiple sources to learn about trends, styles and clothing types. The models are constantly learning “what’s in” and “out.” Soon, these capabilities will enable co-creation of fashion in real time. Designers could, for example, visually change a digital design to shorten the sleeves of a dress or change a pattern from stripes to polka dots.
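Fashable has not published its architecture, but since the article names PyTorch, here is a minimal sketch of the kind of generative model involved: a DCGAN-style generator that maps random latent vectors to small RGB design images. Every size, layer choice, and class name below is an illustrative assumption, not Fashable's actual model.

```python
# Illustrative DCGAN-style generator in PyTorch; all sizes and names are
# assumptions for explanation, not Fashable's actual architecture.
import torch
import torch.nn as nn

class GarmentGenerator(nn.Module):
    """Maps a latent vector to a 64x64 RGB candidate design image."""
    def __init__(self, latent_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),  # -> 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),         # -> 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),           # -> 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),            # -> 32x32
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),                                     # -> 64x64
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z.view(z.size(0), -1, 1, 1))

# Sample a small "collection" of candidate designs from random latents.
gen = GarmentGenerator()
designs = gen(torch.randn(12, 100))
print(designs.shape)  # torch.Size([12, 3, 64, 64])
```

The "collection in one click" workflow described next amounts to sampling a batch of latents like this and letting designers curate the outputs.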
In one click, Fashable AI can create an entire collection. Designers can take their pieces to social media to A/B test directly with customers to gauge interest and forecast demand before going into production. Where it used to take months to get a new collection from design to department store, with Fashable it now takes minutes — with far less labor and no guesswork.
The company’s customers use its AI technology across various production phases:
Creation — from mood boards to iterations and final assortment (design)
Industrialization — from assortment to tech specs and integrations with tools
Content creation — from final product to retail-ready content, including imagery for ecommerce, editorial and selling unsold inventory
And the company has moved into metaverse immersive commerce. Brands can now use Fashable to start creating collections for different digital worlds. “Without AI, the process was slow and labor-intensive,” says Fernandes. A recent collaborative collection with Wrong Weather, a casual luxury brand, demonstrated Fashable’s potential.
Disrupting the fashion status quo
Today, Fashable bills itself as “Deep Tech for the fashion Industry,” “The ChatGPT for AI Images and Content Generation” and “The Most Powerful Generative AI Toolkit for The Metaverse and Physical Fashion.” The broadened value statements underscore the benefits for designers in both worlds: saving money on research, design and content creation, reducing copyright issues, and freeing users to focus on high-value design tasks, explains Fernandes.
“With social media, the metaverse and Web3, the need for new content is exploding for fashion brands,” Fernandes says. “The war for new content has never been so intense. Only AI can generate very realistic images to solve that need.”
Fernandes is more convinced than ever of AI’s disruptive power. Beyond keeping today’s trends out of tomorrow’s trash, he believes personalized and exclusive garment designs are the key to business success in physical and virtual worlds. “Generative AI,” he says, “will completely change the status quo in the fashion industry.”
Generative AI is helping retailers in other important ways:
Merchandising and product onboarding. Generative AI can generate images, music, fonts, videos, 3D models and more for advertising and marketing. Custom images can show how a product looks in different settings. For ecommerce, generative AI and computer vision can create product descriptions, attributes, meta tags and cataloging based on product images.
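As a strictly illustrative example of the product-description use case, an open image-captioning model can draft raw copy from a product photo. The model choice, file name, and wrapper function below are my own assumptions, not any vendor's pipeline.

```python
# One possible way to draft product copy from an image, using an open
# captioning model; illustrative only, not any cited vendor's pipeline.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

def draft_description(image_path: str) -> str:
    caption = captioner(image_path)[0]["generated_text"]
    # A merchandiser (or a second, text-generation model) would refine this
    # raw caption into brand-voice copy, attributes and meta tags.
    return f"Product description draft: {caption}"

print(draft_description("sample_product.jpg"))  # hypothetical local image file
```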
Service chatbots and conversational AI. For customers and agents, AI helpers can provide virtual assistance, language translation, order status, search, product recommendations, email and chat answers. Brand avatars are delivering consistent omnichannel customer service whether on a kiosk, mobile app, ecommerce or in the drive-thru. Employees can get answers to FAQs via voice, text, videos, and images in multiple languages.
The right infrastructure is crucial for retail AI
The plentiful opportunities for transformational use of AI also bring new challenges to retailers.
As in other industries, many companies will discover they lack the powerful infrastructure needed to develop and deploy AI-enabled applications. Requirements here are typically far more demanding than for conventional computing, especially with large model sizes and high complexity. Optimized, accelerated environments that are “purpose-built” for AI are needed to deliver real-time speed, predictability and accuracy.
Here is what retail experts say is needed for AI success:
The versatility to support diverse models with one end-to-end application that can deliver the desired user experience.
High performance and scalability to optimize time-to-solution and deployment costs.
An end-to-end solution stack that supports the entire workflow — including data prep, model building, training and deployment within an AI-powered service.
A uniform stack to flexibly train in the cloud and deploy at the edge in stores and other locations.
For many, meeting these criteria will mean adding AI to the growing list of critical workloads shifting to the cloud as a way of gaining high-performance processing, servers, networking, storage, development platforms and environments and software without heavy capital costs.
Getting infrastructure right is crucial, agrees Everseen’s O’Herlihy. He says a high-performing cloud environment has been essential to his company’s AI success in several ways: it enables scaling across thousands of locations, allows easy “lifting and shifting” of technology building blocks from one part of a store to another, and delivers the performance that lets AI models understand a moving scene, including how humans interact with objects, in real time.
On-demand services speed innovation
Fashable’s Fernandes concurs. “We talk a lot about on-demand cloud infrastructure and services to accelerate product innovation and build competitive advantage. ‘Doing more with less’ with a small team is important, so we can invest in our IP and leverage Azure Machine Learning and PyTorch with the NVIDIA AI platform to achieve state-of-the-art results. This is crucial for minimizing upfront investments and risks so that our applied research team can fail and adapt quicker.”
A related important consideration, he says, is the data used for AI innovation. Fashable opted to build its own datasets with internal tools to get “complete control” of its innovation. But, Fernandes acknowledges, “the risks and cost can be prohibitive for building AI innovations, so business leaders can minimize those by using existing environments and tools and not ‘reinventing the wheel.’”
Laggards risk missing out on $1.7 trillion of new industry value
Today’s retailers live at the intersection of commerce, consumers and technology transformation. Some 87% are planning to increase investments in AI/ML in 2023. Yet researchers say many retailers have not gotten started with smart technologies. Laggards risk missing out on the $1.7 trillion in business value, roughly 12% of all sales, that McKinsey estimates AI and analytics could generate for the retail industry.
For many economists and forecasters, retail is an important canary in the coal mine. Sales and investments are seen as key indicators not just for the sector, but for the overall economy. If so, retailing’s surging embrace and emerging success using AI for innovation and transformation amidst multiple headwinds bode well far beyond store walls.
Go Deeper
VB Lab Insights content is created in collaboration with a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.