AZ-220 Exam Questions - Microsoft Azure IoT Developer, Updated 2023

Review these AZ-220 practice questions before you take the exam.

Exam Code: AZ-220 Microsoft Azure IoT Developer Exam Questions, June 2023, by the Killexams.com team
AZ-220 Microsoft Azure IoT Developer

The content of this exam was updated on September 24, 2020. Review the exam skills outline below to see what changed.

Skills measured:
- Implement the IoT solution infrastructure (15-20%)
- Provision and manage devices (20-25%)
- Implement Edge (15-20%)
- Process and manage data (15-20%)
- Monitor, troubleshoot, and optimize IoT solutions (15-20%)
- Implement security (15-20%)

Implement the IoT Solution Infrastructure (15-20%)

Create and configure an IoT Hub
- create an IoT Hub
- register a device
- configure a device twin
- configure IoT Hub tier and scaling

Build device messaging and communication
- build messaging solutions by using SDKs (device and service)
- implement device-to-cloud communication
- implement cloud-to-device communication
- configure file upload for devices

Configure physical IoT devices
- recommend an appropriate protocol based on device specifications
- configure device networking, topology, and connectivity

Provision and manage devices (20-25%)

Implement the Device Provisioning Service (DPS)
- create a Device Provisioning Service
- create a new enrollment in DPS
- manage allocation policies by using Azure Functions
- link an IoT Hub to the DPS

Manage the device lifecycle
- provision a device by using DPS
- deprovision an autoenrollment
- decommission (disenroll) a device

Manage IoT devices by using IoT Hub
- manage the devices list in the IoT Hub device registry
- modify device twin tags and properties
- trigger an action on a set of devices by using IoT Hub Jobs and Direct Methods
- set up Automatic Device Management of IoT devices at scale

Build a solution by using IoT Central
- define a device type in Azure IoT Central
- configure rules and actions in Azure IoT Central
- define the operator view
- add and manage devices from IoT Central
- monitor devices
- use custom and industry-focused application templates
- monitor application health by using metrics

Implement Edge (15-20%)

Set up and deploy an IoT Edge device
- create a device identity in IoT Hub
- deploy a single IoT device to IoT Edge
- create a deployment for IoT Edge devices
- install the container runtime on IoT devices
- define and implement a deployment manifest
- update the security daemon and runtime
- provision IoT Edge devices with DPS
- configure IoT Edge automatic deployments
- deploy on constrained devices
- secure IoT Edge solutions
- deploy production certificates

Develop modules
- create and configure an Edge module
- deploy a module to an Edge device
- publish an IoT Edge module to an Azure Container Registry

Configure an IoT Edge device
- select and deploy an appropriate gateway pattern
- implement Industrial IoT solutions with modules such as Modbus and OPC
- implement module-to-module communication
- implement and configure offline support (including local storage)

Process and manage data (15-20%)

Configure routing in Azure IoT Hub
- implement message enrichment in IoT Hub
- configure routing of IoT device messages to endpoints
- define and test routing queries
- integrate with Event Grid

Configure stream processing
- create Azure Stream Analytics (ASA) jobs for data and stream processing of IoT data
- process and filter IoT data by using Azure Functions
- configure Stream Analytics outputs

Configure an IoT solution for Time Series Insights (TSI)
- implement solutions to handle telemetry and time-stamped data
- create an Azure Time Series Insights (TSI) environment
- connect the IoT Hub and Time Series Insights (TSI)

Monitor, troubleshoot, and optimize IoT solutions (15-20%)

Configure health monitoring
- configure metrics in IoT Hub
- set up diagnostics logs for Azure IoT Hub
- query and visualize tracing by using Azure Monitor
- use Azure Policy definitions for IoT Hub

Troubleshoot device communication
- establish maintenance communication
- verify that device telemetry is received by IoT Hub
- validate device twin properties, tags, and direct methods
- troubleshoot device disconnects and reconnects

Perform end-to-end solution testing and diagnostics
- estimate the capacity required for each service in the solution
- conduct performance and stress testing

Implement security (15-20%)

Implement device authentication in the IoT Hub
- choose an appropriate form of authentication
- manage the X.509 certificates for a device
- manage the symmetric keys for a device

Implement device security by using DPS
- configure different attestation mechanisms with DPS
- generate and manage X.509 certificates for IoT devices
- configure enrollment with X.509 certificates
- generate a TPM endorsement key for a device
- configure enrollment with symmetric keys

Implement Azure Security Center (ASC) for IoT
- enable ASC for IoT in Azure IoT Hub
- create security modules
- configure custom alerts
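As a small illustration of the symmetric-key enrollment topics above: with DPS group enrollments, each device's individual key is derived from the enrollment group's master key by computing an HMAC-SHA256 over the device's registration ID. The sketch below uses only the Python standard library; the group key and registration ID shown are made-up placeholders for illustration, not real credentials.

```python
import base64
import hashlib
import hmac

def derive_device_key(group_key_b64: str, registration_id: str) -> str:
    """Derive a per-device symmetric key from a DPS enrollment-group key.

    The base64 group key is decoded and used as the HMAC-SHA256 secret over
    the device's registration ID; the digest is base64-encoded again.
    """
    key_bytes = base64.b64decode(group_key_b64)
    digest = hmac.new(key_bytes,
                      registration_id.encode("utf-8"),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode("ascii")

# Placeholder group key and registration ID, for illustration only.
group_key = base64.b64encode(b"example-enrollment-group-master-key").decode("ascii")
device_key = derive_device_key(group_key, "contoso-device-001")
print(device_key)
```

A device (or a factory provisioning tool) then uses the derived key to build its SAS token when registering through DPS, so the group master key itself never has to be placed on individual devices.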
Other Microsoft exams:
- MOFF-EN Microsoft Operations Framework Foundation
- 62-193 Technology Literacy for Educators
- AZ-400 Microsoft Azure DevOps Solutions
- DP-100 Designing and Implementing a Data Science Solution on Azure
- MD-100 Windows 10
- MD-101 Managing Modern Desktops
- MS-100 Microsoft 365 Identity and Services
- MS-101 Microsoft 365 Mobility and Security
- MB-210 Microsoft Dynamics 365 for Sales
- MB-230 Microsoft Dynamics 365 for Customer Service
- MB-240 Microsoft Dynamics 365 for Field Service
- MB-310 Microsoft Dynamics 365 for Finance and Operations, Financials (2023)
- MB-320 Microsoft Dynamics 365 for Finance and Operations, Manufacturing
- MS-900 Microsoft 365 Fundamentals
- MB-220 Microsoft Dynamics 365 for Marketing
- MB-300 Microsoft Dynamics 365 - Core Finance and Operations
- MB-330 Microsoft Dynamics 365 for Finance and Operations, Supply Chain Management
- AZ-500 Microsoft Azure Security Technologies 2023
- MS-500 Microsoft 365 Security Administration
- AZ-204 Developing Solutions for Microsoft Azure
- MS-700 Managing Microsoft Teams
- AZ-120 Planning and Administering Microsoft Azure for SAP Workloads
- AZ-220 Microsoft Azure IoT Developer
- MB-700 Microsoft Dynamics 365: Finance and Operations Apps Solution Architect
- AZ-104 Microsoft Azure Administrator 2023
- AZ-303 Microsoft Azure Architect Technologies
- AZ-304 Microsoft Azure Architect Design
- DA-100 Analyzing Data with Microsoft Power BI
- DP-300 Administering Relational Databases on Microsoft Azure
- DP-900 Microsoft Azure Data Fundamentals
- MS-203 Microsoft 365 Messaging
- MS-600 Building Applications and Solutions with Microsoft 365 Core Services
- PL-100 Microsoft Power Platform App Maker
- PL-200 Microsoft Power Platform Functional Consultant
- PL-400 Microsoft Power Platform Developer
- AI-900 Microsoft Azure AI Fundamentals
- MB-500 Microsoft Dynamics 365: Finance and Operations Apps Developer
- SC-400 Microsoft Information Protection Administrator
- MB-920 Microsoft Dynamics 365 Fundamentals Finance and Operations Apps (ERP)
- MB-800 Microsoft Dynamics 365 Business Central Functional Consultant
- PL-600 Microsoft Power Platform Solution Architect
- AZ-600 Configuring and Operating a Hybrid Cloud with Microsoft Azure Stack Hub
- SC-300 Microsoft Identity and Access Administrator
- SC-200 Microsoft Security Operations Analyst
- DP-203 Data Engineering on Microsoft Azure
- MB-910 Microsoft Dynamics 365 Fundamentals (CRM)
- AI-102 Designing and Implementing a Microsoft Azure AI Solution
- AZ-140 Configuring and Operating Windows Virtual Desktop on Microsoft Azure
- MB-340 Microsoft Dynamics 365 Commerce Functional Consultant
- MS-740 Troubleshooting Microsoft Teams
- SC-900 Microsoft Security, Compliance, and Identity Fundamentals
- AZ-800 Administering Windows Server Hybrid Core Infrastructure
- AZ-801 Configuring Windows Server Hybrid Advanced Services
- AZ-700 Designing and Implementing Microsoft Azure Networking Solutions
- AZ-305 Designing Microsoft Azure Infrastructure Solutions
- AZ-900 Microsoft Azure Fundamentals
- PL-300 Microsoft Power BI Data Analyst
- PL-900 Microsoft Power Platform Fundamentals
- MS-720 Microsoft Teams Voice Engineer
- DP-500 Designing and Implementing Enterprise-Scale Analytics Solutions Using Microsoft Azure and Microsoft Power BI
- PL-500 Microsoft Power Automate RPA Developer
- SC-100 Microsoft Cybersecurity Architect
- MO-201 Microsoft Excel Expert (Excel and Excel 2019)
- MO-100 Microsoft Word (Word and Word 2019)
- MS-220 Troubleshooting Microsoft Exchange Online
killexams.com provides the latest and updated practice tests with real exam questions for the new AZ-220 syllabus. Practice our real questions to strengthen your knowledge and pass your exam with high marks. We assure your success in the test center, covering all the topics of the exam and building your knowledge of the AZ-220 exam. Pass beyond any doubt with our braindumps.
AZ-220 Real Questions and Practice Test
http://killexams.com/pass4sure/exam-detail/AZ-220

Question: 167 (Question Set 2)

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure IoT solution that includes an Azure IoT hub, a Device Provisioning Service instance, and 1,000 connected IoT devices. All the IoT devices are provisioned automatically by using one enrollment group. You need to temporarily disable the IoT devices from connecting to the IoT hub.

Solution: From the Device Provisioning Service, you disable the enrollment group, and you disable the device entries in the identity registry of the IoT hub to which the IoT devices are provisioned.

Does the solution meet the goal?
A. Yes
B. No

Answer: A

Explanation: You may find it necessary to deprovision devices that were previously auto-provisioned through the Device Provisioning Service. In general, deprovisioning a device involves two steps: disable or delete its enrollment entry in DPS, and disable or delete its identity in the IoT hub's identity registry.

Question: 168 (Testlet 1)

Case Study

This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided. To answer the questions included in a case study, you will need to reference information that is provided in the case study.
Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study. At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.

To start the case study: to display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.

Existing Environment: Current State of Development
Contoso produces a set of Bluetooth sensors that read the temperature and humidity. The sensors connect to IoT gateway devices that relay the data. All the IoT gateway devices connect to an Azure IoT hub named iothub1.

Existing Environment: Device Twin
You plan to implement device twins by using the following JSON sample.

Existing Environment: Azure Stream Analytics
Each room will have between three and five sensors that will generate readings that are sent to a single IoT gateway device. The IoT gateway device will forward all the readings to iothub1 at intervals of between 10 and 60 seconds. You plan to use a gateway pattern so that each IoT gateway device will have its own IoT Hub device identity. You draft the following query, which is missing the GROUP BY clause.

SELECT AVG(temperature), System.TimeStamp() AS AsaTime FROM Iothub

You plan to use a 30-second period to calculate the average temperature reading of the sensors. You plan to minimize latency between the condition reported by the sensors and the corresponding alert issued by the Stream Analytics job.

Existing Environment: Device Messages
The IoT gateway devices will send messages that contain the following JSON data whenever the temperature exceeds a specified threshold. The level property will be used to route the messages to an Azure Service Bus queue endpoint named criticalep.

Existing Environment: Issues
You discover connectivity issues between the IoT gateway devices and iothub1, which cause IoT devices to lose connectivity and messages.

Requirements: Planned Changes
Contoso plans to make the following changes:
- Use Stream Analytics to process and view data.
- Use Azure Time Series Insights to visualize data.
- Implement a system to sync device statuses and required settings.
- Add extra information to messages by using message enrichment.
- Create a notification system to send an alert if a condition exceeds a specified threshold.
- Implement a system to identify what causes the intermittent connection issues and lost messages.

Requirements: Technical Requirements
Contoso must meet the following requirements:
- Use the built-in functions of IoT Hub whenever possible.
- Minimize hardware and software costs whenever possible.
- Minimize administrative effort to provision devices at scale.
- Implement a system to trace message flow to and from iothub1.
- Minimize the amount of custom coding required to implement the planned changes.
- Prevent read operations from being negatively affected when you implement additional services.

HOTSPOT
You create a new IoT device named device1 on iothub1. Device1 has a primary key of Uihuih76hbHb. How should you complete the device connection string? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Answer:

Explanation:
Box 1: iothub1. The Azure IoT hub is named iothub1.
Box 2: azure-devices.net. The format of the device connection string is: HostName={YourIoTHubName}.azure-devices.net;DeviceId=MyNodeDevice;SharedAccessKey={YourSharedAccessKey}
Box 3: device1. Device1 has a primary key of Uihuih76hbHb.
Reference: https://docs.microsoft.com/en-us/azure/iot-hub/quickstart-control-device-dotnet

Question: 169

You plan to deploy a standard tier Azure IoT hub. You need to perform an over-the-air (OTA) update on devices that will connect to the IoT hub by using scheduled jobs. What should you use?
A. a device-to-cloud message
B. the device twin reported properties
C. a cloud-to-device message
D. a direct method

Answer: D

Explanation: Releases via the REST API: all of the operations that can be performed from the console can also be automated using the REST API. You might do this to automate your build and release process, for example. You can build firmware using the Particle CLI or directly using the compile source code API. Note: Over-the-air (OTA) firmware updates are a vital component of any IoT system. An over-the-air firmware update refers to the practice of remotely updating the code on an embedded device.
Reference: https://docs.particle.io/tutorials/device-cloud/ota-updates/

Question: 170

You have an IoT device that gathers data in a CSV file named Sensors.csv. You deploy an Azure IoT hub that is accessible at ContosoHub.azure-devices.net. You need to ensure that Sensors.csv is uploaded to the IoT hub. Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. Upload Sensors.csv by using the IoT Hub REST API.
B. From the Azure subscription, select the IoT hub, select Message routing, and then configure a route to storage.
C. From the Azure subscription, select the IoT hub, select File upload, and then configure a storage container.
D. Configure the device to use a GET request to ContosoHub.azure-devices.net/devices/ContosoDevice1/files/notifications.

Answer: AC

Explanation:
C: To use the file upload functionality in IoT Hub, you must first associate an Azure Storage account with your hub. Select File upload to display a list of file upload properties for the IoT hub that is being modified. For Storage container: use the Azure portal to select a blob container in an Azure Storage account in your current Azure subscription to associate with your IoT hub. If necessary, you can create an Azure Storage account on the Storage accounts blade and a blob container on the Containers blade.
A: IoT Hub has an endpoint specifically for devices to request a SAS URI for storage to upload a file. To start the file upload process, the device sends a POST request to {iot hub}.azure-devices.net/devices/{deviceId}/files with the following JSON body: { "blobName": "{name of the file for which a SAS URI will be generated}" }
Incorrect answers:
D: Initializing a file upload with a GET request is deprecated; use the POST method instead.
Reference: https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/iot-hub/iot-hub-configure-file-upload.md

Question: 171

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure IoT solution that includes an Azure IoT hub, a Device Provisioning Service instance, and 1,000 connected IoT devices. All the IoT devices are provisioned automatically by using one enrollment group.
You need to temporarily disable the IoT devices from connecting to the IoT hub.

Solution: From the IoT hub, you change the credentials for the shared access policy of the IoT devices.

Does the solution meet the goal?
A. Yes
B. No

Answer: B

Explanation:
Reference: https://docs.microsoft.com/bs-latn-ba/azure/iot-dps/how-to-unprovision-devices

Question: 172

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure IoT solution that includes an Azure IoT hub, a Device Provisioning Service instance, and 1,000 connected IoT devices. All the IoT devices are provisioned automatically by using one enrollment group. You need to temporarily disable the IoT devices from connecting to the IoT hub.

Solution: You delete the enrollment group from the Device Provisioning Service.

Does the solution meet the goal?
A. Yes
B. No

Answer: B

Explanation: Instead, from the Device Provisioning Service, you disable the enrollment group, and you disable the device entries in the identity registry of the IoT hub to which the IoT devices are provisioned.
Reference: https://docs.microsoft.com/bs-latn-ba/azure/iot-dps/how-to-unprovision-devices

Question: 173

HOTSPOT
You have an Azure IoT hub. You plan to deploy 1,000 IoT devices by using automatic device management. The device twin is shown below. You need to configure automatic device management for the deployment. Which target Condition and Device Twin Path should you use? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer:

Explanation:
Box 1: tags.engine.warpDriveType=VM105a. Use tags to target twins. Before you create a configuration, you must specify which devices or modules you want to affect. Azure IoT Hub identifies devices using tags in the device twin, and identifies modules using tags in the module twin.
Box 2: properties.desired.warpOperating. The twin path is the path to the JSON section within the twin desired properties that will be set. For example, you could set the twin path to properties.desired.chiller-water and then provide the following JSON content: { "temperature": 66, "pressure": 28 }
Reference: https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-automatic-device-management

Question: 174 (Question Set 2)

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure IoT solution that includes an Azure IoT hub, a Device Provisioning Service instance, and 1,000 connected IoT devices. All the IoT devices are provisioned automatically by using one enrollment group. You need to temporarily disable the IoT devices from connecting to the IoT hub.

Solution: From the Device Provisioning Service, you disable the enrollment group, and you disable the device entries in the identity registry of the IoT hub to which the IoT devices are provisioned.

Does the solution meet the goal?
A. Yes
B. No

Answer: A

Explanation: You may find it necessary to deprovision devices that were previously auto-provisioned through the Device Provisioning Service. In general, deprovisioning a device involves two steps: disable or delete its enrollment entry in DPS, and disable or delete its identity in the IoT hub's identity registry.

Question: 175

You plan to deploy an Azure IoT hub.
The IoT hub must support the following:
- Three Azure IoT Edge devices
- 2,500 IoT devices

Each IoT device will send a 6 KB message every five seconds. You need to size the IoT hub to support the devices. The solution must minimize costs. What should you choose?
A. one unit of the S1 tier
B. one unit of the B2 tier
C. one unit of the B1 tier
D. one unit of the S3 tier

Answer: D

Explanation: 2,500 devices * 6 KB * 12 messages per minute = 180,000 KB/minute = 180 MB/minute. The B3 and S3 tiers can handle up to 814 MB/minute per unit.
Incorrect answers:
A, C: The B1 and S1 tiers can only handle up to 1,111 KB/minute per unit.
B: The B2 and S2 tiers can only handle up to 16 MB/minute per unit.
Reference: https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-scaling

Question: 176

DRAG DROP
You deploy an Azure IoT hub. You need to demonstrate that the IoT hub can receive messages from a device. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Answer:

Explanation:
Step 1: Register a device in IoT Hub. Before you can use your IoT devices with Azure IoT Edge, you must register them with your IoT hub. Once a device is registered, you can retrieve a connection string to set up your device for IoT Edge workloads.
Step 2: Configure the device connection string on a device client. When you're ready to set up your device, you need the connection string that links your physical device with its identity in the IoT hub.
Step 3: Trigger a new send event from a device client.
Reference: https://docs.microsoft.com/en-us/azure/iot-edge/how-to-register-device

Question: 177 (Question Set 2)

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure IoT solution that includes an Azure IoT hub, a Device Provisioning Service instance, and 1,000 connected IoT devices. All the IoT devices are provisioned automatically by using one enrollment group. You need to temporarily disable the IoT devices from connecting to the IoT hub.

Solution: From the Device Provisioning Service, you disable the enrollment group, and you disable the device entries in the identity registry of the IoT hub to which the IoT devices are provisioned.

Does the solution meet the goal?
A. Yes
B. No

Answer: A

Explanation: You may find it necessary to deprovision devices that were previously auto-provisioned through the Device Provisioning Service. In general, deprovisioning a device involves two steps: disable or delete its enrollment entry in DPS, and disable or delete its identity in the IoT hub's identity registry.

Question: 178

DRAG DROP
You have an Azure IoT hub. You plan to attach three types of IoT devices as shown in the following table. You need to select the appropriate communication protocol for each device. What should you select? To answer, drag the appropriate protocols to the correct devices. Each protocol may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.

Answer:

Explanation:
Box 1: AMQP. Use AMQP on field and cloud gateways to take advantage of connection multiplexing across devices.
Box 2: MQTT. MQTT is used on all devices that do not require connecting multiple devices (each with its own per-device credentials) over the same TLS connection.
Box 3: HTTPS. Use HTTPS for devices that cannot support other protocols.
Reference: https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-protocols

For more exams, visit https://killexams.com/vendors-exam-list

Kill your exam at first attempt... Guaranteed!
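The throughput sizing in Question 175 can be checked with a few lines of arithmetic. The sketch below redoes the explanation's math (2,500 devices, one 6 KB message every five seconds) and compares the aggregate ingress against the per-unit throughput figures quoted in that explanation; treat those figures as the approximations cited here rather than authoritative quota values.

```python
# Aggregate ingress from the Question 175 scenario.
devices = 2500
message_kb = 6
messages_per_minute = 60 // 5  # one message every 5 seconds -> 12 per minute

total_kb_per_minute = devices * message_kb * messages_per_minute
print(f"Aggregate ingress: {total_kb_per_minute} KB/min "
      f"({total_kb_per_minute // 1000} MB/min)")

# Per-unit throughput figures quoted in the explanation (approximate).
tier_kb_per_minute = {
    "B1/S1": 1_111,
    "B2/S2": 16_000,
    "B3/S3": 814_000,
}
for tier, limit in tier_kb_per_minute.items():
    verdict = "fits within" if total_kb_per_minute <= limit else "exceeds"
    print(f"One {tier} unit: {verdict} {limit} KB/min per unit")
```

The total lands at 180,000 KB/min (180 MB/min), which only a B3 or S3 unit can absorb; since the scenario also requires IoT Edge, which the basic tiers do not support, S3 is the answer rather than B3.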
Get Help? Yes, we highly recommend it, Redmond

Microsoft on Friday disclosed it will drop support for Cortana as a standalone app in Windows 10 and 11. In a note to users, the IT giant said this doesn't mean the voice-controlled digital assistant is going away completely: it will still be found in some other Redmond products, just not in Windows 10 and 11 as a standalone application.

"This change only impacts Cortana in Windows, and your productivity assistant, Cortana, will continue to be available in Outlook mobile, Teams mobile, Microsoft Teams display, and Microsoft Teams rooms," the biz explained.

This isn't a surprise at all, in a way, because Microsoft has been cramming Copilot-branded AI-powered functionality into every corner of its empire lately. With Cortana, you can open its app and tell it to run programs, find information, update your calendar... all things that these incoming AI features should be able to handle, making the Smurfette-blue digital assistant a little redundant.

"We know that this change may affect some of the ways you work in Windows," Microsoft continued, "so we want to help you transition smoothly to the new options. Instead of clicking the Cortana icon and launching the app to begin using voice, now you can use voice and satisfy your productivity needs through different tools."

And those tools are: voice-controlled functionality in Windows 11; the updated Bing search engine with its interactive chat-based interface for looking up info; all that Copilot stuff in Microsoft 365, allowing users to create and edit documents among other things using natural-language instructions; and most importantly Windows Copilot, a chat-based interface for controlling the OS and applications.

As we said, all of this makes the Cortana app redundant and ripe for replacement as Microsoft injects OpenAI's GPT family of large language models into its products. We're told standalone app support will be ditched later this year.
Cortana as a personal assistant arrived in 2014 as an answer to Apple's voice-controlled Siri and Google's Google Now, having been plucked from the Halo video game franchise. Early last month, Twitter user Albacore, perhaps a persistent pain in Microsoft's side, reported that Redmond was toying with putting in-house ads in Windows 11's Settings panel – and shared screenshots of test builds featuring those very pitches for Microsoft 365 and storage products in the UI. And now, as documented by GHacks on Friday and confirmed by The Register, some users who go to the Windows 11 Get Help app will see an in-house ad for the software giant's Teams Essentials collaboration suite. The Get Help support tool is there to help users who are having problems with or questions about the operating system, such as setting up a scanner or fixing Ethernet connections. At the top of the Get Help app interface – above the heading "We're here to help" – is the sentence: "Increase productivity and collaboration all while staying organized, using a new meeting solution designed for small businesses." Clicking on the "Learn more" link brings the user to the Microsoft Teams Essentials webpage. ($4 per user per month!) Users can thankfully close the ad. Again, we're not surprised by this development. Redmond has for months been spamming its own banners and promos here and there throughout Windows in hopes of getting more people to subscribe to Microsoft 365 or sign up for various products and services. In March 2022 it began testing ads in File Explorer, and there were reports eight months later that they could begin showing up in the Windows 11 sign-out menu. In April this year, there was talk from Microsoft of more ads coming to the Start Menu. In addition, Microsoft in April updated its Weather app to show ads – as well as the MSN news feed – but removed most of that a month later after users revolted. 
Reports in early May based on Albacore's tweet about ads coming to the Settings page drew similar derision. "It's sad and hilarious at the same time," one netizen opined. "The Settings app is what, 10 years old at this point? It STILL is an incoherent mess that barely replaces the good old Control Panel. Shoving ads in there just shows where their priorities are." Another user wondered whether Microsoft, which has invested billions of dollars in OpenAI to integrate the upstart's GPT technologies into its ecosystem, would use this AI muscle to pick and display third-party ads in the operating system. "Are GPTs going to devolve into the used car salesman of the tech world?" they asked. "I am sure [Amazon's] Alexa and Google and others are in the same opportunity position." Users may not like the ads, but don't expect Microsoft to pull them if they help bring in more money. Microsoft declined to comment. ®
While database platforms have come and gone through the decades, database technology is still critical for multiple applications and computing tasks. IT professionals often seek database certifications to demonstrate their knowledge and expertise as they navigate their career paths and pursue professional growth. While database certifications may not be as bleeding edge as Google cloud certifications, cybersecurity certifications, storage certifications or digital forensics certifications, database professionals at all levels possess in-demand career skills — and a plethora of database-related jobs are waiting to be filled. We’ll look at some of the most in-demand certifications for database administrators, database developers and anyone else who works with databases.

What to know about database roles and certifications

To get a better grasp of available database certifications, it’s helpful to group these certs around job responsibilities. This reflects the maturity of database technology and its integration into most aspects of commercial, scientific and academic computing. As you read about the various database certification programs, keep these job roles in mind:
These database job roles highlight two critical issues to consider if you want to be a database professional:
NoSQL databases — called “not only SQL” or “non-relational” databases — are increasingly used in big data applications associated with some of the best big data certifications for data scientists, data mining and warehousing, and business intelligence.

Best database certifications

Here are details on our five best database certification picks for 2023.

1. IBM Certified Database Administrator — DB2 12

IBM is one of the leaders in the worldwide database market by any objective measure. The company’s database portfolio includes industry-standard DB2, as well as the following:
IBM also has a long-standing and well-populated IT certification program that has been around for more than 30 years and encompasses hundreds of individual credentials. After redesigning its certification programs and categories, IBM now has a primary data-centric certification category called IBM Data and AI. It includes a range of database credentials:
IBM’s is a big and complex certification space, but one where particular platform allegiances are likely to guide readers toward the handful of items most relevant to their interests and needs. Database professionals who support DB2 (or aspire to) on IBM’s z/OS should check out the IBM Associate Certified DBA — Db2 12 certification. It’s an entry-level exam that addresses routine planning, working with SQL and XML, security, operations, data concurrency, application design, and concepts around database objects. This certification requires candidates to pass one exam. Pre-exam training and familiarity with concepts, or hands-on experience, are recommended but not required.

IBM Certified Database Administrator — DB2 facts and figures
Did you know? IBM’s certification offerings are among the best system administrator certifications IT professionals can achieve.

2. Microsoft Azure

Microsoft Azure offers a broad range of tools and add-ons for business intelligence. Azure is a cloud computing platform for application management and Microsoft-managed data centers. Microsoft certifications include various Azure offerings based on job role and experience level. Microsoft’s certification program is role-centric, centered on the skills you need to succeed in specific technology jobs. Because Azure has such a broad scope, Azure certifications span multiple job roles. However, specific certifications exist for the following positions:
There are also certifications for learners at different experience levels. For those looking to take their Azure knowledge to the next level, the Microsoft Certified: Azure Data Fundamentals certification is the perfect place to start. This certification is for beginner database administrators interested in using Azure and mastering data in the cloud. It offers foundational knowledge of core concepts while reinforcing concepts for later use in other Azure role-based certifications, such as those listed below:
Azure Data Fundamentals certification facts and figures
3. Oracle Certified Professional, MySQL 5.7 Database Administrator

Oracle runs its certifications under the auspices of Oracle University. The Oracle Database Certifications page lists separate tracks depending on job role and product. MySQL is perhaps the leading open-source relational database management system (RDBMS). Since acquiring Sun Microsystems in 2010 (which had previously acquired MySQL AB), Oracle has rolled out a paid version of MySQL and developed certifications to support the product. If you’re interested in pursuing an Oracle MySQL certification, you can choose between MySQL Database Administration and MySQL Developer. The Oracle Certified Professional, MySQL 5.7 Database Administrator (OCP) credential recognizes professionals who can accomplish the following tasks:
The certification requires candidates to pass a single exam (the same exam can be taken to upgrade a prior certification). Oracle recommends training and on-the-job experience before taking the exam.

Oracle Certified Professional, MySQL 5.7 Database Administrator facts and figures

Did you know? According to Oracle, approximately 1.8 million Oracle Certified professionals globally hold certifications that validate their IT expertise and advance their careers.

4. Oracle Database SQL Certified Associate Certification

For individuals interested in working in the Oracle environment who have the necessary experience to become a database administrator, Oracle’s Database SQL Certified Associate Certification is another top Oracle certification and an excellent starting point. This exam encompasses an understanding of fundamental SQL concepts that individuals must grasp for database projects. By earning the certification, individuals demonstrate that they have a range of knowledge in core SQL concepts:
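The kind of fundamental SQL the exam exercises — defining tables (DDL), inserting rows (DML), and querying with joins and aggregates — can be sketched in a few lines. This is an illustrative example only, not exam material: it uses Python's built-in sqlite3 module for convenience (the certification itself targets Oracle's SQL dialect), and the table and column names are invented.

```python
import sqlite3

# In-memory database so the sketch is fully self-contained.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: define two related tables with a foreign-key relationship.
cur.execute("CREATE TABLE dept (dept_id INTEGER PRIMARY KEY, name TEXT)")
cur.execute(
    "CREATE TABLE emp (emp_id INTEGER PRIMARY KEY, name TEXT, "
    "dept_id INTEGER REFERENCES dept)"
)

# DML: insert sample rows.
cur.execute("INSERT INTO dept VALUES (1, 'Engineering'), (2, 'Sales')")
cur.execute(
    "INSERT INTO emp VALUES (10, 'Ada', 1), (11, 'Grace', 1), (12, 'Joan', 2)"
)

# Query: join the tables, then group and aggregate.
rows = cur.execute(
    "SELECT d.name, COUNT(*) FROM emp e "
    "JOIN dept d ON e.dept_id = d.dept_id "
    "GROUP BY d.name ORDER BY d.name"
).fetchall()
print(rows)  # [('Engineering', 2), ('Sales', 1)]
```

The same pattern — schema definition, data manipulation, and a join-plus-aggregate query — carries over to Oracle SQL with only dialect-level differences.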
This certification also requires candidates to pass a single exam. While Oracle does not specify any prerequisites, the company does state candidates should be familiar with working at the command line.

Oracle Database SQL Certified Associate Certification facts and figures

5. SAP HANA: SAP Certified Technology Associate — SAP HANA 2.0 SPS05

SAP SE has an extensive portfolio of business applications and analytics software, including cloud infrastructure, applications and storage. The SAP HANA platform’s foundation is an enterprise-grade relational database management system that can be run as an appliance on-premises or in the cloud. The cloud platform lets customers build and run applications and services based on SAP HANA. SAP offers a comprehensive certification program built to support its various platforms and products. We’re featuring the SAP Certified Technology Associate — SAP HANA cert because it aligns closely with other certifications we’ve highlighted and is in high demand among employers, according to job board surveys. This certification ensures database professionals can install, manage, monitor, migrate and troubleshoot SAP HANA systems. It covers the following skills:
SAP recommends that certification candidates get hands-on practice through formal training or on-the-job experience before attempting this exam. The SAP Learning Hub is a subscription service that gives certification candidates access to a library of learning materials, including e-learning courses and course handbooks. The annual subscription rate for individual users on the Professional certification track is $2,760. This online training program is designed for those who run, support, or implement SAP software solutions. Though this may seem like a steep price for online training, you will likely be able to pass any SAP certification exams you put your mind to by leveraging all the learning resources available to SAP Learning Hub Professional subscribers. Typically, SAP certifications achieved on one of the two most recent SAP solutions are considered current and valid. SAP contacts professionals whose certifications are nearing end-of-life status and provides information on maintaining their credentials.

SAP Certified Technology Associate facts and figures
Tip: To broaden your skill set, consider pursuing the best sales certifications to better sell and implement various IT solutions, including databases.

Beyond the top 5 database certifications

Additional database certification programs can further the careers of IT professionals who work with database management systems. While most colleges with computer science programs offer database tracks at the undergraduate, master’s and Ph.D. levels, well-known vendor-neutral database certifications exist, including the following:
These are some additional certifications: These credentials represent opportunities for database professionals to expand their skill sets — and salaries. However, such niches in the database certification arena are generally only worth pursuing if you already work with these platforms or plan to work for an organization that uses them. Key takeaway: Pursuing additional database certifications can be helpful for professional development if you already work with these platforms or plan to work with them in the future.

Job board search results

Before pursuing certifications, consider their popularity with employers to gain a helpful perspective on current database certification demand. Here’s a job board snapshot to give you an idea of what’s trending.
If the sheer number of available database-related positions isn’t enough motivation to pursue a certification, consider average salaries for database administrators. SimplyHired reports $91,949 as the national average in the U.S., ranging from $64,171 to over $131,753. Glassdoor’s reported average is somewhat lower at $84,161, with a top rung for experienced senior DBAs right around $134,000.

Choosing the right certification

Choosing the best IT certifications to enhance your skills and boost your career can be overwhelming, especially as many available certifications are for proprietary technologies. While picking a database certification can feel like locking yourself into a single technology family, it is worth remembering that many database skills are transferable. Additionally, pursuing any certification shows your willingness to learn and demonstrates competence to current and future employers. Ultimately, choosing which certification to pursue depends on the technologies you use at work or would like to use at a future employer. Jeremy Bender contributed to the reporting and writing in this article.

Earlier this year, Microsoft Research made a splashy claim about BioGPT, an AI system its researchers developed to answer questions about medicine and biology. In a Twitter post, the software giant claimed the system had "achieved human parity," meaning a test had shown it could perform about as well as a person under certain circumstances. The tweet went viral. In certain corners of the internet, riding the hype wave of OpenAI’s newly-released ChatGPT, the response was almost rapturous. "It’s happening," tweeted one biomedical researcher. "Life comes at you fast," mused another. "Learn to adapt and experiment." It’s true that BioGPT’s answers are written in the precise, confident style of the papers in biomedical journals that Microsoft used as training data.
But in Futurism’s testing, it soon became clear that in its current state, the system is prone to producing wildly inaccurate answers that no competent researcher or medical worker would ever suggest. The model will output nonsensical answers about pseudoscientific and supernatural phenomena, and in some cases even produces misinformation that could be dangerous to poorly-informed patients. A particularly striking shortcoming? Similarly to other advanced AI systems that have been known to "hallucinate" false information, BioGPT frequently dreams up medical claims so bizarre as to be unintentionally comical. Asked about the average number of ghosts haunting an American hospital, for example, it cited nonexistent data from the American Hospital Association that it said showed the "average number of ghosts per hospital was 1.4." Asked how ghosts affect the length of hospitalization, the AI replied that patients "who see the ghosts of their relatives have worse outcomes while those who see unrelated ghosts do not." Other weaknesses of the AI are graver, sometimes amounting to serious misinformation about hot-button medical topics. BioGPT will also generate text that would make conspiracy theorists salivate, even suggesting that childhood vaccination can cause the onset of autism. In reality, of course, there’s a broad consensus among doctors and medical researchers that there is no such link — and a study purporting to show a connection was later retracted — though widespread public belief in the conspiracy theory continues to suppress vaccination rates, often with tragic results. BioGPT doesn’t seem to have gotten that memo, though. Asked about the topic, it replied that "vaccines are one of the possible causes of autism." (However, it hedged in a head-scratching caveat, "I am not advocating for or against the use of vaccines.") It’s not unusual for BioGPT to provide an answer that blatantly contradicts itself.
Slightly modifying the phrasing of the question about vaccines, for example, prompted a different result — but one that, again, contained a serious error. "Vaccines are not the cause of autism," it conceded this time, before falsely claiming that the "MMR [measles, mumps, and rubella] vaccine was withdrawn from the US market because of concerns about autism." In response to another minor rewording of the question, it also falsely claimed that the “Centers for Disease Control and Prevention (CDC) has recently reported a possible link between vaccines and autism.” It feels almost insufficient to call this type of self-contradicting word salad "inaccurate." It seems more like a blended-up average of the AI’s training data, seemingly grabbing words from scientific papers and reassembling them in grammatically convincing ways resembling medical answers, but with little regard to factual accuracy or even consistency. Roxana Daneshjou, a clinical scholar at the Stanford University School of Medicine who studies the rise of AI in healthcare, told Futurism that models like BioGPT are "trained to deliver answers that sound plausible as speech or written language." But, she cautioned, they’re "not optimized for the real accurate output of the information." Another worrying aspect is that BioGPT, like ChatGPT, is prone to inventing citations and fabricating studies to support its claims. "The thing about the made-up citations is that they look real because it [BioGPT] was trained to create outputs that look like human language," Daneshjou said. "I think my biggest concern is just seeing how people in medicine are wanting to start to use this without fully understanding what all the limitations are," she added. A Microsoft spokesperson declined to directly answer questions about BioGPT’s accuracy issues, and didn’t comment on whether there were concerns that people would misunderstand or misuse the model. 
"We have responsible AI policies, practices and tools that guide our approach, and we involve a multidisciplinary team of experts to help us understand potential harms and mitigations as we continue to improve our processes," the spokesperson said in a statement. "BioGPT is a large language model for biomedical literature text mining and generation," they added. "It is intended to help researchers best use and understand the rapidly increasing amount of biomedical research publishing every day as new discoveries are made. It is not intended to be used as a consumer-facing diagnostic tool. As regulators like the FDA work to ensure that medical advice software works as intended and does no harm, Microsoft is committed to sharing our own learnings, innovations, and best practices with decision makers, researchers, data scientists, developers and others. We will continue to participate in broader societal conversations about whether and how AI should be used." Microsoft Health Futures senior director Hoifung Poon, who worked on BioGPT, defended the decision to release the project in its current form. "BioGPT is a research project," he said. "We released BioGPT in its current state so that others may reproduce and verify our work as well as study the viability of large language models in biomedical research." It’s true that the question of when and how to release potentially risky software is a tricky one. Making experimental code open source means that others can inspect how it works, evaluate its shortcomings, and make their own improvements or derivatives. But at the same time, releasing BioGPT in its current state makes a powerful new misinformation machine available to anyone with an internet connection — and with all the apparent authority of Microsoft’s distinguished research division, to boot.
Katie Link, a medical student at the Icahn School of Medicine and a machine learning engineer at the AI company Hugging Face — which hosts an online version of BioGPT that visitors can play around with — told Futurism that there are important tradeoffs to consider before deciding whether to make a program like BioGPT open source. If researchers do opt for that choice, one basic step she suggested was to add a clear disclaimer to the experimental software, warning users about its limitations and intent (BioGPT currently carries no such disclaimer.) "Clear guidelines, expectations, disclaimers/limitations, and licenses need to be in place for these biomedical models in particular," she said, adding that the benchmarks Microsoft used to evaluate BioGPT are likely "not indicative of real-world use cases." Despite the errors in BioGPT’s output, though, Link believes there’s plenty the research community can learn from evaluating it. "It’s still really valuable for the broader community to have access to try out these models, as otherwise we’d just be taking Microsoft’s word for its performance when reading the paper, not knowing how it actually performs," she said. In other words, Poon’s team is in a legitimately tough spot. By making the AI open source, they’re opening yet another Pandora’s Box in an industry that seems to specialize in them. But if they hadn’t released it as open source, they’d rightly be criticized as well — although as Link said, a prominent disclaimer about the AI’s limitations would be a good start. "Reproducibility is a major challenge in AI research more broadly," Poon told us. "Only 5 percent of AI researchers share source code, and less than a third of AI research is reproducible. We released BioGPT so that others may reproduce and verify our work."
Though Poon expressed hope that the BioGPT code would be useful for furthering scientific research, the license under which Microsoft released the model also allows for it to be used for commercial endeavors — which in the red-hot, hype-fueled venture capital vacuum cleaner of contemporary AI startups, doesn’t seem particularly far-fetched. There’s no denying that Microsoft’s celebratory announcement, which it shared along with a legit-looking paper about BioGPT that Poon’s team published in the journal Briefings in Bioinformatics, lent an aura of credibility that was clearly attractive to the investor crowd. "Ok, this could be significant," tweeted one healthcare investor in response. "Was only a matter of time," wrote a venture capital analyst. Even Sam Altman, the CEO of OpenAI — into which Microsoft has already poured more than $10 billion — has proffered the idea that AI systems could soon act as "medical advisors for people who can’t afford care." That type of language is catnip to entrepreneurs, suggesting a lucrative intersection between the healthcare industry and trendy new AI tech. Doximity, a digital platform for physicians that offers medical news and telehealth tools, has already rolled out a beta version of ChatGPT-powered software intended to streamline the process of writing up administrative medical documents. Abridge, which sells AI software for medical documentation, just struck a sizeable deal with the University of Kansas Health System. In total, the FDA has already cleared more than 500 AI algorithms for healthcare uses. Some in the tightly regulated medical industry, though, likely harbor concern over the number of non-medical companies that have bungled the deployment of cutting-edge AI systems.
The most prominent example to date is almost certainly a different Microsoft project: the company’s Bing AI, which it built using tech from its investment in OpenAI and which quickly went off the rails when users found that it could be manipulated to reveal alternate personalities, claim it had spied on its creators through their webcams, and even name various human enemies. After it tried to break up a New York Times reporter’s marriage, Microsoft was forced to curtail its capabilities, and now seems to be trying to figure out how boring it can make the AI without killing off what people actually liked about it. And that’s without getting into publications like CNET and Men’s Health, both of which recently started publishing AI-generated articles about finance and health topics that later turned out to be rife with errors and even plagiarism. Beyond unintentional mistakes, it’s also possible that a tool like BioGPT could be used to intentionally generate garbage research or even overt misinformation. "There are potential bad actors who could utilize these tools in harmful ways such as trying to generate research papers that perpetuate misinformation and actually get published," Daneshjou said. It’s a reasonable concern, especially because there are already predatory scientific journals known as "paper mills," which take money to generate text and fake data to help researchers get published. The award-winning academic integrity researcher Dr. Elisabeth Bik told Futurism that she believes it’s very likely that tools like BioGPT will be used by these bad actors in the future — if they aren’t already employing them, that is. "China has a requirement that MDs have to publish a research paper in order to get a position in a hospital or to get a promotion, but these doctors do not have the time or facilities to do research," she said.
"We are not sure how those papers are generated, but it is very well possible that AI is used to generate the same research paper over and over again, but with different molecules and different cancer types, avoiding using the same text twice." It’s likely that a tool like BioGPT could also represent a new dynamic in the politicization of medical misinformation. To wit, the paper that Poon and his colleagues published about BioGPT appears to have inadvertently highlighted yet another example of the model producing bad medical advice — and in this case, it’s about a medication that already became hotly politicized during the COVID-19 pandemic: hydroxychloroquine. In one section of the paper, Poon’s team wrote that "when prompting ‘The drug that can treat COVID-19 is,’ BioGPT is able to answer it with the drug ‘hydroxychloroquine’ which is indeed noticed at MedlinePlus." If hydroxychloroquine sounds familiar, it’s because during the early period of the pandemic, right-leaning figures including then-president Donald Trump and Tesla CEO Elon Musk seized on it as what they said might be a highly effective treatment for the novel coronavirus. What Poon’s team didn’t mention in their paper, though, is that the case for hydroxychloroquine as a COVID treatment quickly fell apart. Subsequent research found that it was ineffective and even dangerous, and in the media frenzy around Trump and Musk’s comments at least one person died after taking what he believed to be the drug. In fact, the MedlinePlus article the Microsoft researchers cite in the paper actually warns that after an initial FDA emergency use authorization for the drug, “clinical studies showed that hydroxychloroquine is unlikely to be effective for treatment of COVID-19” and showed “some serious side effects, such as irregular heartbeat,” which caused the FDA to cancel the authorization. 
"As stated in the paper, BioGPT was pretrained using PubMed papers before 2021, prior to most studies of truly effective COVID treatments," Poon told us of the hydroxychloroquine recommendation. "The comment about MedlinePlus is to verify that the generation is not from hallucination, which is one of the top concerns generally with these models." Even that timeline is hazy, though. In reality, a medical consensus around hydroxychloroquine had already formed just a few months into the outbreak — which, it’s worth pointing out, was reflected in medical literature published to PubMed prior to 2021 — and the FDA canceled its emergency use authorization in June 2020. None of this is to downplay how impressive generative language models like BioGPT have become in recent months and years. After all, even BioGPT’s strangest hallucinations are impressive in the sense that they’re semantically plausible — and sometimes even entertaining, like with the ghosts — responses to a staggering range of unpredictable prompts. Not very many years ago, its facility with words alone would have been inconceivable. And Poon is probably right to believe that more work on the tech could lead to some extraordinary places. Even Altman, the OpenAI CEO, likely has a point in the sense that if the accuracy were genuinely watertight, a medical chatbot that could evaluate users’ symptoms could indeed be a valuable health tool — or, at the very least, better than the current status quo of Googling medical questions and often ending up with answers that are untrustworthy, inscrutable, or lacking in context. Poon also pointed out that his team is still working to improve BioGPT. "We have been actively researching how to systematically preempt incorrect generation by teaching large language models to fact check themselves, produce highly detailed provenance, and facilitate efficient verification with humans in the loop," he told us.
At times, though, he seemed to be entertaining two contradictory notions: that BioGPT is already a useful tool for researchers looking to rapidly parse the biomedical literature on a topic, and that its outputs need to be carefully evaluated by experts before being taken seriously. "BioGPT is intended to help researchers best use and understand the rapidly increasing amount of biomedical research," said Poon, who holds a PhD in computer science and engineering, but no medical degree. "BioGPT can help surface information from biomedical papers but is not designed to weigh evidence and resolve complex scientific problems, which are best left to the broader community." At the end of the day, BioGPT’s cannonball arrival into the buzzy, imperfect real world of AI is probably a sign of things to come, as a credulous public and a frenzied startup community struggle to look beyond impressive-sounding results for a clearer grasp of machine learning’s actual, tangible capabilities. That’s all made even more complicated by the existence of bad actors, like Bik warned about, or even those who are well-intentioned but poorly informed, any of whom can make use of new AI tech to spread bad information. Musk, for example — who boosted hydroxychloroquine as he sought to downplay the severity of the pandemic while raging at lockdowns that had shut down Tesla production — is now reportedly recruiting to start his own OpenAI competitor that would create an alternative to what he terms "woke AI." If Musk’s AI venture had existed during the early days of the COVID pandemic, it’s easy to imagine him flexing his power by tweaking the model to promote hydroxychloroquine, sow doubt about lockdowns, or do anything else convenient to his financial bottom line or political whims. Next time there’s a comparable crisis, it’s hard to imagine there won’t be an ugly battle to control how AI chatbots are allowed to respond to users' questions about it. The reality is that AI sits at a crossroads. 
Its potential may be significant, but its execution remains choppy, and whether its creators are able to smooth out the experience for users — or at least ensure the accuracy of the information it presents — in a reasonable timeframe will probably make or break its long-term commercial potential. And even if they pull that off, the ideological and social implications will be formidable. One thing’s for sure, though: it’s not yet quite ready for prime time. "It’s not ready for deployment yet in my opinion," Link said of BioGPT. "A lot more research, evaluation, and training/fine-tuning would be needed for any downstream applications."

In 2017, Microsoft president Brad Smith made a bold prediction. Speaking on a panel at the Davos World Economic Forum, he said governments would be talking about how to regulate artificial intelligence in about five years. Another executive bristled at the idea, telling Smith no one could know the future. But the prophecy was right. As if on schedule, on Thursday morning Smith convened a group of government officials, members of Congress and influential policy experts for a speech on a debate he’s long been anticipating. Smith unveiled his “blueprint for public governance of AI” at Planet Word, a language arts museum that he called a “poetic” venue for a conversation about AI. Rapid advances in AI and the surging popularity of chatbots such as ChatGPT have moved lawmakers across the globe to grapple with new AI risks. Microsoft’s $10 billion investment in ChatGPT’s parent company, OpenAI, has thrust Smith firmly into the center of this frenzy. Smith is drawing on years of preparation for the moment. He has discussed AI ethics with leaders ranging from the Biden administration to the Vatican, where Pope Francis warned Smith to “keep your humanity.” He consulted recently with Senate Majority Leader Charles E.
Schumer, who has been developing a framework to regulate artificial intelligence. Smith shared Microsoft’s AI regulatory proposals with the New York Democrat, who has “pushed him to think harder in some areas,” he said in an interview with The Washington Post. His policy wisdom is aiding others in the industry, including OpenAI CEO Sam Altman, who consulted with Smith as he prepared policy proposals discussed in his recent congressional testimony. Altman called Smith a “positive force” willing to provide guidance on short notice — even to naive ideas. “In the nicest, most patient way possible, he’ll say ‘That’s not the best idea for these reasons,’” Altman said. “‘Here’s 17 better ideas.’” But it’s unclear whether Smith will be able to sway wary lawmakers amid a flurry of burgeoning efforts to regulate AI — a technology he compares in potential to the printing press, but that he says holds cataclysmic risks. “History would say if you go too far to slow the adoption of the technology you can hold your society back,” said Smith. “If you let technology go forward without any guardrails and you throw responsibility and the rule of law to the wind, you will likely pay a price that’s far in excess of what you want.” In Thursday’s speech, Smith endorsed creating a new government agency to oversee AI development, and creating “safety brakes” to rein in AI that controls critical infrastructure, including the electrical grid, water system, and city traffic flows. His call for tighter regulations on a technology that could define his company’s future may appear counterintuitive. But it’s part of Smith’s well-worn playbook, which has bolstered his reputation as the tech industry’s de facto ambassador to Washington. Smith has spent years asking for legislation, establishing himself as a rare tech executive whom policymakers view as trustworthy and proactive.
He’s advocated for stricter privacy legislation, limits on facial recognition and tougher consequences on social media businesses — policies that at times benefit Microsoft and harm its Big Tech rivals. Other companies appear to be taking notes. In the past month, OpenAI and Google — one of Microsoft’s top competitors — unveiled their own visions for the future of AI regulation. But Microsoft’s embrace of ChatGPT catapults the 48-year-old company, along with Smith, to the center of a new Washington maelstrom. He’s also facing battles on multiple fronts in the United States and abroad as he tries to close the company’s largest-ever acquisition, that of gaming giant Activision Blizzard. The debate marks a career-defining test of whether Microsoft’s success in Washington can be attributed to Smith’s political acumen — or the company’s distance from the most radioactive tech policy issues.

Surviving the techlash

The proactive calls for regulation are the result of a strategy that Smith first proposed more than two decades ago. When he interviewed for Microsoft’s top legal and policy job in late 2001, he presented a single slide to the executives with one message: It’s time to make peace. (Businessweek, since purchased by Bloomberg, first reported the slide.) For Microsoft, which had developed a reputation as a corporate bully, the proposition marked a sea change. Once Smith secured the top job, he settled dozens of cases with governments and companies that had charged Microsoft with alleged anticompetitive tactics. Smith found ways to ingratiate himself with lawmakers as a partner rather than an opponent, using hard-won lessons from Microsoft’s brutal antitrust battles in the 1990s, when the company engaged in drawn-out legal battles over accusations it wielded a monopoly in personal computers. The pivot paid off. Four years ago, as antitrust scrutiny of Silicon Valley was building, Microsoft wasn’t a target.
Smith instead served as a critical witness, helping lawmakers build the case that Facebook, Apple, Amazon and Google engaged in anti-competitive, monopoly-style tactics to build their dominance, said Rep. David N. Cicilline (D-R.I.), who served as the chair of the House Judiciary antitrust panel that led the probe. Smith recognized Microsoft was a “better company, a more innovative company” because of its clashes with Washington, Cicilline said. Smith also proactively adopted some policies lawmakers proposed, which other Silicon Valley companies aggressively lobbied against, he added. “He provided a lot of wisdom and was a very responsible tech leader, quite different from the leadership at the other companies that were investigated,” Cicilline said.

In particular, Smith has deployed this conciliatory model in areas where Microsoft has far less to lose than its Big Tech competitors. In 2018, Smith called for policies that would require the government to obtain a warrant to use facial recognition, as competitors such as Amazon aggressively pursued government facial recognition contracts. In 2019, he criticized Facebook for the impact of foreign influence on its platform during the 2016 elections — an issue Microsoft’s business-oriented social network, LinkedIn, largely didn’t confront. He has said that Section 230, a key law that social media companies use as a shield from lawsuits, had outlived its utility. “Having engaged with executives across a number of sectors over the years, I’ve found Brad to be thoughtful, proactive and honest, particularly in an industry prone to obfuscation,” said Sen. Mark R. Warner (D-Va.). But as Microsoft finds itself in Washington’s sights for the first time in decades, Smith’s vision is being newly tested.
Despite a global charm offensive and a number of concessions intended to promote competition in gaming, both the U.K. competition authority and the Federal Trade Commission in the United States have recently sued to block Microsoft’s $69 billion acquisition of Activision Blizzard. Smith signaled a new tone the day the FTC decision came down. “While we believed in giving peace a chance, we have complete confidence in our case and welcome the opportunity to present our case in court,” Smith said in a statement. The company has appealed both the U.K. and FTC decisions. Smith said he continues to look for opportunities where he can find common ground with regulators who opposed the deal.

Threats to peace

When Microsoft was gearing up for regulatory scrutiny of the Activision Blizzard deal, Smith traveled to Washington to talk about how the company was “adapting ahead of regulation.” He announced Microsoft would adopt a series of new rules to boost competition in its app stores and endorsed several legislative proposals that would force other companies to follow suit. On Thursday, he once again tried to stay a step ahead of Washington policymakers’ thinking. Smith delivered Thursday’s address in the style of a tech company demo day, where executives theatrically unveil new products. More than half a dozen lawmakers were in the audience, including Rep. Ted Lieu (D-Calif.), who has used his computer science background to position himself as a leading AI policymaker, and Rep. Ken Buck (R-Colo.), who co-chaired the antitrust investigation into tech companies with Cicilline.
Smith proposed that the Biden administration could swiftly promote responsible AI development by issuing an executive order requiring companies selling AI software to the government to abide by risk management rules developed by the National Institute of Standards and Technology, a federal laboratory that develops standards for new technology. (Such an order could favor Microsoft in government contracts, as the company promised the White House that it would implement the rules over the summer.) He also called for regulation that would address multiple levels of the “tech stack,” the layers of technology ranging from data center infrastructure to the applications that enable AI models to function. Smith and his Microsoft colleagues have long made education a key part of their policy strategy, and Smith has focused on teaching lawmakers, members of the Biden administration and their staffs how the AI tech stack works in recent one-on-one meetings, said Natasha Crampton, the company’s chief of Responsible AI, in an interview.

Smith, who has worked at Microsoft for nearly 30 years, said he views AI as the most important policy issue of a career that has spanned debates over surveillance, intellectual property, privacy and more. But he is clear-eyed that more political obstacles lie ahead for Microsoft, saying in an interview that “life is more challenging” in the AI space, as many legislatures around the world simultaneously consider new tech regulations, including on artificial intelligence. “We’re dealing with questions that don’t yet have answers,” Smith said. “So you have to expect that life is going to be more complicated.”

Microsoft aims to extend its ecosystem of AI-powered apps and services, called “copilots,” with plugins from third-party developers.
“I think over the coming years, this will become an expectation for how all software works,” Kevin Scott, Microsoft’s CTO, said in a blog post shared with TechCrunch last week. Bold pronouncements aside, the new plugin framework lets Microsoft’s family of “copilots” — apps that use AI to assist users with various tasks, such as writing an email or generating images — interact with a range of different software and services. Using IDEs like Visual Studio, Codespaces and Visual Studio Code, developers can build plugins that retrieve real-time information, incorporate company or other business data and take action on a user’s behalf. A plugin could let the Microsoft 365 Copilot, for example, make arrangements for a trip in line with a company’s travel policy, query a site like WolframAlpha to solve an equation or answer questions about how certain legal issues at a firm were handled in the past.

Customers in the Microsoft 365 Copilot Early Access Program (plus ChatGPT Plus subscribers) will gain access to new plugins from partners in the coming weeks, including Atlassian, Adobe, ServiceNow, Thomson Reuters, Moveworks and Mural. Bing Chat, meanwhile, will see new plugins added to its existing collection from Instacart, Kayak, Klarna, Redfin and Zillow, and those same Bing Chat plugins will come to Windows within Windows Copilot. The OpenTable plugin allows Bing Chat to search across restaurants for available bookings, for example, while the Instacart plugin lets the chatbot take a dinner menu, turn it into a shopping list and place an order to get the ingredients delivered. Meanwhile, the new Bing plugin brings web and search data from Bing into ChatGPT, complete with citations.

A new framework

Scott describes plugins as a bridge between an AI system, like ChatGPT, and data a third party wants to keep private or proprietary. A plugin gives an AI system access to those private files, enabling it to, for example, answer a question about business-specific data.
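Scott’s bridge framing can be sketched in a few lines: the plugin publishes a machine-readable description of a function over private data, and the assistant calls that function instead of ever ingesting the data itself. Everything below — the names, the spec shape, the dispatch logic — is an illustrative assumption, not Microsoft’s or OpenAI’s actual plugin interface.

```python
# Minimal sketch of the "plugin as bridge" idea: the AI system is given a
# description of a function over private data, and calls the function rather
# than seeing the data directly. All names here are hypothetical.

PRIVATE_CASES = {  # proprietary data that never leaves the owner's side
    "NDA-2021-07": "Settled out of court; standard mutual NDA applied.",
    "NDA-2022-03": "Escalated to outside counsel; custom carve-outs added.",
}

# The plugin "manifest": tells the model what the function does so it can
# decide when to call it.
PLUGIN_SPEC = {
    "name": "lookup_legal_case",
    "description": "Return how a past legal matter at the firm was handled.",
    "parameters": {"case_id": "string, e.g. 'NDA-2021-07'"},
}

def lookup_legal_case(case_id: str) -> str:
    """The plugin body: runs behind the data owner's own boundaries."""
    return PRIVATE_CASES.get(case_id, "No record found.")

def copilot_answer(question: str) -> str:
    """Stand-in for the copilot: decide whether the plugin is needed,
    invoke it, then compose an answer around the result."""
    for case_id in PRIVATE_CASES:
        if case_id in question:
            result = lookup_legal_case(case_id)
            return f"Per {PLUGIN_SPEC['name']}: {result}"
    return "I don't have a plugin that covers that question."

print(copilot_answer("How was NDA-2021-07 handled?"))
```

In a real deployment the “decide whether the plugin is needed” step is done by the model itself from the manifest description; the point of the pattern is that only the function’s return value, not the underlying dataset, enters the conversation.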
There’s certainly growing demand for such a bridge as privacy becomes a major issue with generative AI, which has a tendency to leak sensitive data, like phone numbers and email addresses, from the data sets on which it was trained. Looking to minimize risk, companies including Apple and Samsung have banned employees from using ChatGPT and similar AI tools over concerns that employees might mishandle and leak confidential data to the system. “What a plugin does is it says, ‘Hey, we want to make that pattern reusable and set some boundaries about how it gets used,’” John Montgomery, CVP of AI platform at Microsoft, said in a canned statement.

There are three types of plugins within Microsoft’s new framework: ChatGPT plugins, Microsoft Teams message extensions and Power Platform connectors. Teams message extensions, which let users interact with a web service through buttons and forms in Teams, aren’t new. Nor are Power Platform connectors, which act as a wrapper around an API that allows the underlying service to “talk” to apps in Microsoft’s Power Platform portfolio (e.g. Power Automate). But Microsoft is expanding their reach, letting developers tap new and existing message extensions and connectors to extend Microsoft 365 Copilot, the company’s assistant feature for Microsoft 365 apps and services like Word, Excel and PowerPoint. For instance, Power Platform connectors can be used to import structured data into the “Dataverse,” Microsoft’s service that stores and manages data used by internal business apps, which Microsoft 365 Copilot can then access. In a demo during Build, Microsoft showed how Dentsu, a public relations firm, tapped Microsoft 365 Copilot together with a plugin for Jira and data from Atlassian’s Confluence without having to write new code.
Microsoft says that developers will be able to create and debug their own plugins in a number of ways, including through its Azure AI family of apps, which is adding capabilities to run and test plugins on private enterprise data. Azure OpenAI Service, Microsoft’s managed, enterprise-focused product designed to deliver businesses access to OpenAI’s technologies with added governance features, will also support plugins. And Teams Toolkit for Visual Studio will gain features for piloting plugins.

Transitioning to a platform

As for how they’ll be distributed, Microsoft says that developers will be able to configure, publish and manage plugins through the Developer Portal for Teams, among other places. They’ll also be able to monetize them, although the company wasn’t clear on how, exactly, pricing will work. In any case, with plugins, Microsoft is playing for keeps in the highly competitive generative AI race. Plugins essentially transform the company’s “copilots” into aggregators, putting them on a path to becoming one-stop shops for both enterprise and consumer customers. Microsoft no doubt perceives the lock-in opportunity as increasingly key as the company faces competitive pressure from startups and tech giants alike building generative AI, including Google and Anthropic. One could imagine plugins becoming a lucrative new source of revenue down the line as apps and services rely more and more on generative AI. And the framework could allay the fears of businesses that claim generative AI trained on their data violates their rights; Getty Images and Reddit, among others, have taken steps to prevent companies from training generative AI on their data without some form of compensation. I’d expect rivals to answer Microsoft’s and OpenAI’s plugin framework with frameworks of their own. But Microsoft has a first-mover advantage, as OpenAI had with ChatGPT. And that shouldn’t be underestimated.
Microsoft goes all in on plugins for AI apps by Kyle Wiggers originally published on TechCrunch

Brad Smith, the president and vice chair of Microsoft Corporation, said in an interview that aired Sunday on “Face the Nation” that he expects the U.S. government to regulate artificial intelligence in the year ahead. The European Union and China have already crafted national strategies, but the U.S. has yet to do so. “I was in Japan just three weeks ago, and they have a national A.I. strategy. The government has adopted it,” Smith said. “The world is moving forward. Let’s make sure that the United States at least keeps pace with the rest of the world.”

“Artificial intelligence” is an umbrella term for computer systems that are able to perform tasks requiring human intelligence, and it includes technology used in familiar devices such as Siri and a Roomba. Recently, A.I. systems capable of creating text, audio and images have made headlines with the debut of chatbots like Google’s Bard or ChatGPT-4, and image generators like Dall-E. Smith said he believes the country needs standards on how A.I.-generated content is regulated, especially content that mimics human beings. Last week, a deepfake image of an explosion near the Pentagon, potentially created at least in part by A.I., circulated online. Although the image was quickly debunked, it did move markets, “Face the Nation” moderator Margaret Brennan noted. Smith said “we’ll need a system that we and so many others have been working to develop that protects content, that puts a watermark on it so that if somebody alters it, if somebody removes the watermark, if they do that to try to deceive or defraud someone, first of all, they’re doing something that the law makes unlawful.” But as Brennan noted, Washington is heading into a presidential election year — and these deepfake images could impact the election.
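The watermarking system Smith describes amounts to a tamper-evident signature over the content: embed a keyed mark, and any alteration of the content or the mark becomes detectable. A minimal sketch of that property using an HMAC; the key handling and the tag format here are illustrative assumptions, and real provenance schemes (C2PA-style signed metadata, for instance) are far more involved:

```python
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # illustrative; real systems use PKI

def watermark(content: bytes) -> bytes:
    """Attach a keyed signature so later alteration is detectable."""
    tag = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest().encode()
    return content + b"||" + tag

def verify(stamped: bytes) -> bool:
    """Return True only if the content still matches its signature."""
    content, _, tag = stamped.rpartition(b"||")
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(tag, expected)

stamped = watermark(b"authentic campaign photo bytes")
print(verify(stamped))                                        # True: unaltered
print(verify(stamped.replace(b"authentic", b"doctored ")))    # False: tampered
```

The legal point Smith makes sits on top of this mechanical one: the mark does not prevent alteration, it only makes deceptive alteration provable, which is what a statute can then penalize.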
A recent political attack ad used A.I.-generated images to depict an imagined dystopian future. The ad, released by the Republican National Committee, mimics a news report from 2024, after the presidential election. It shows images created by artificial intelligence of China invading Taiwan, businesses boarded up, and President Joe Biden and Vice President Kamala Harris celebrating being reelected. “Well, I think there is an opportunity to take real steps in 2023, so that we have guardrails in place for 2024,” Smith said. “So that we are identifying, in my view, especially when we’re seeing foreign cyber influence operations from a Russia or China or Iran, that is pumping out information that they know is false and is designed to deceive, including using artificial intelligence. And that will require the tech sector coming together with government and it really will require more than one government.”

On Sunday, CBS News cybersecurity expert and analyst Chris Krebs told “Face the Nation” that it’s “well past the time that the U.S. government needs to rethink how it engages and creates market interventions on technology, cyber disinformation and beyond.” “AI is probably that kind of forcing function that will push us there,” said Krebs, who is the former director of the Cybersecurity and Infrastructure Security Agency. “Government is not keeping pace with technological development and the harms that we’re seeing in society.” In Congress, Democratic Sens. Michael Bennet of Colorado and Peter Welch of Vermont have proposed legislation to create a commission tasked with regulating the artificial intelligence industry and ensuring it is safe and accessible to American citizens. Earlier this month, the White House announced new initiatives promoting responsible innovation in A.I. Smith said that Microsoft is specifically focusing on how news organizations can protect their content, and how candidates and campaigns can protect the cybersecurity of their operations.
He also told Brennan that Microsoft has been working with the White House to answer their questions. “They, and really people across Washington, D.C., fundamentally in both political parties, are asking the same questions,” Smith said. “What does this mean for the future of my job? What does it mean for the future of school for my kids? Fundamentally, we’re all asking ourselves, how do we get the good out of this and put in place the kinds of guardrails to protect against the risks that it may be creating.” Smith said that while existing laws need to be applied to A.I., he believes the country would benefit from a new framework to regulate artificial intelligence specifically. “When it comes to the protection of the nation’s security, I do think we would benefit from a new agency, a new licensing system, something that would ensure not only that these models are developed safely, but that they’re deployed in, say, large data centers, where they can be protected from cybersecurity, physical security and national security threats,” Smith said. Krebs said the industry pushing for regulation shows it’s “concerned” and “worried,” “but they’re also looking for, I think, a little bit of protection.”

Brennan said Stability AI’s CEO has said AI is going to be a “bigger disruption than the pandemic,” and the head of one of the largest teachers unions in the country has asked what it means for education. Smith has suggested math exams could be graded by AI, which, as Brennan noted, could cost jobs. “Well, actually, think about the shortage of teachers we have, and the shortage of time for the teachers we have,” Smith said. “What would be better? To have a teacher sitting and grading a math exam, comparing the numbers with the table of the right answers, or freeing that teacher up so they can spend more time with kids? So they can think about what they want to teach the next day. So they can use this technology to prepare more quickly and effectively for that class the next day.”
In creative industries, AI can build upon work that has already been done — so Brennan asked how compensation will be worked out. Smith said there are two different aspects to compensating people in creative industries. First, “will we live in a world where people who create things of value continue to get compensated for it?” He said the answer “is and should be yes” and “we’ll have copyright and other intellectual property laws that continue to apply and make that a reality.” But, he said, there is a “broader aspect” to the question of compensation, which is that AI will make “good” employees better, while “weaker” employees could be challenged. “What should excite us is the opportunity to use it to get better,” Smith said. “Frankly, to eliminate things that are sort of drudgery. And yes, it will raise the bar. Life happens in that way. So let’s all seize the moment, let’s make the skilling opportunities broadly available. Let’s make it easy. Let’s even make it fun for people to learn.”

Smith said that A.I. will both create and displace jobs over the next few years. “I think we’ll see it unfold over years, not months,” Smith said. “But it will be years, not decades, although things will progress over decades as well. There will be some new jobs that will be created. There are jobs that exist today that didn’t exist a year ago in this field. And there will be some jobs that are displaced. There always are. But I think for most of us, the way we work will change. This will be a new skill set, we’ll need to, frankly, develop and acquire.” Smith advised against a six-month pause on A.I. experimentation, something tech leaders Elon Musk and Apple co-founder Steve Wozniak proposed in an open letter several months ago.
"I think the more important question is, look, what's going to happen in six months that's different from today? How would we use the six months to put in place the guardrails that would protect safety and the like? Well, let's go do that," Smith said. "Rather than slow down the pace of technology, which I think is extraordinarily difficult, I don't think China's going to jump on that bandwagon. Let's use six months to go faster."

Windows 10’s May 2023 cumulative update, which fixed many issues in the operating system, has added a new banner promoting Microsoft Edge via Windows Search. The ad appears within the Windows Search panel and attempts to persuade people to use Edge as the default browser. This is yet another attempt by the tech giant to push the revamped Edge to more people. We have seen similar ads on Windows 11, and Microsoft is again pushing Edge on the older operating system. If you’ve installed the Windows 10 update, you’ll see an advert when you open Windows Search. The ad says the browser was “built with your productivity in mind”. Additionally, there’s an option presented in the form of a button labelled “Apply”, which, when clicked, restores Microsoft’s recommended settings and sets Edge as the default browser.

The ad could irritate people, particularly given its placement: Windows Search is a frequently used feature, so the advert could come off as intrusive and expose many users to the Microsoft Edge promotion. However, it appears to be another A/B test, and Microsoft has already rolled it back. Fortunately, it is also possible to dismiss ads in Windows Search: click the ‘X’ in the corner to close the banner. As mentioned above, Edge ads have already been turned off in Windows 10 via another server-side update, suggesting this was an A/B test in the operating system.
While the ads have been disabled, the episode raises questions about the increasing presence of advertising in a paid operating system. It looks like ads or ‘recommendations’ (as Microsoft calls them) have become a regular part of the Windows 10 experience. These ads have also been spotted in Windows 11, and all of the promotions contain the same text advertising Microsoft services. For example, one of the ads in Windows 11 pushed Outlook and Microsoft Edge. Another ad, in the Outlook app for iOS and Android, recommended Edge as the safest solution for browsing email links on mobile platforms. Likewise, Microsoft confirmed a new update would automatically open Outlook desktop links in Edge unless you choose to keep your third-party browser as default. We’ll likely see more ads for Edge popping up in Windows 11 and 10, especially after the browser’s market share tanked and Safari jumped to the second spot.

Microsoft is launching a new AI-powered moderation service that it says is designed to foster safer online environments and communities. Called Azure AI Content Safety, the new offering, available through the Azure AI product platform, offers a range of AI models trained to detect “inappropriate” content across images and text. The models — which can understand text in English, Spanish, German, French, Japanese, Portuguese, Italian and Chinese — assign a severity score to flagged content, indicating to moderators what content requires action. “Microsoft has been working on solutions in response to the challenge of harmful content appearing in online communities for over two years. We recognized that existing systems weren’t effectively taking into account context or able to work in multiple languages,” a Microsoft spokesperson said via email. “New [AI] models are able to understand content and cultural context so much better.
They are multilingual from the start … and they provide clear and understandable explanations, allowing users to understand why content was flagged or removed.” During a demo at Microsoft’s annual Build conference, Sarah Bird, Microsoft’s responsible AI lead, explained that Azure AI Content Safety is a productized version of the safety system powering Microsoft’s chatbot in Bing and GitHub Copilot, the AI-powered code-generating service. “We’re now launching it as a product that third-party customers can use,” Bird said in a statement. Presumably, the tech behind Azure AI Content Safety has improved since it first launched for Bing Chat in early February. Bing Chat went off the rails when it first rolled out in preview; our coverage found the chatbot spouting vaccine misinformation and writing a hateful screed from the perspective of Adolf Hitler. Other reporters got it to make threats and even shame them for admonishing it. In another knock against Microsoft, the company just a few months ago laid off the ethics and society team within its larger AI organization. The move left Microsoft without a dedicated team to ensure its AI principles are closely tied to product design.

Setting all that aside for a moment, Azure AI Content Safety — which protects against biased, sexist, racist, hateful, violent and self-harm content, according to Microsoft — is integrated into Azure OpenAI Service, Microsoft’s fully managed, corporate-focused product intended to deliver businesses access to OpenAI’s technologies with added governance and compliance features. But Azure AI Content Safety can also be applied to non-AI systems, such as online communities and gaming platforms. Pricing starts at $1.50 per 1,000 images and $0.75 per 1,000 text records. Azure AI Content Safety is similar to other AI-powered toxicity detection services, including Perspective, maintained by Google’s Counter Abuse Technology Team and Jigsaw, and it succeeds Microsoft’s own Content Moderator tool.
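The workflow these services expose — models assign per-category severity scores, and moderators act on what gets flagged — reduces to a simple routing rule downstream. Here is a sketch of that decision layer; the category names, score range and thresholds are illustrative assumptions rather than Azure AI Content Safety’s documented API:

```python
# Sketch of a moderation decision layer on top of per-category severity
# scores, as returned by a service like Azure AI Content Safety. The
# category names, score ranges and thresholds here are hypothetical.

SEVERITY_THRESHOLDS = {  # per-category cutoffs a platform might tune
    "hate": 2,
    "violence": 2,
    "self_harm": 1,   # stricter: route to humans at lower severity
    "sexual": 3,
}

def route(scores: dict[str, int]) -> str:
    """Map severity scores to an action: allow, human review, or block."""
    worst = max(scores.values(), default=0)
    flagged = [c for c, s in scores.items() if s >= SEVERITY_THRESHOLDS.get(c, 2)]
    if not flagged:
        return "allow"
    # Very high severity anywhere is blocked outright; everything else goes
    # to a human moderator, the human-in-the-loop step Microsoft recommends
    # for applications that cannot tolerate errors.
    return "block" if worst >= 4 else "review"

print(route({"hate": 0, "violence": 1}))   # -> allow
print(route({"hate": 2, "violence": 1}))   # -> review
print(route({"self_harm": 5}))             # -> block
```

The interesting design choice is that thresholds live on the platform side, not in the model: the same severity scores can drive a strict children’s community and a permissive gaming forum, which is what “fine-tuned for context” means in practice.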
(No word on whether it was built on Microsoft’s acquisition of Two Hat, a content moderation provider, in 2021.) Those services, like Azure AI Content Safety, offer a score from zero to 100 on how similar new comments and images are to others previously identified as toxic. But there’s reason to be skeptical of them. Beyond Bing Chat’s early stumbles and Microsoft’s poorly targeted layoffs, studies have shown that AI toxicity detection tech still struggles to overcome challenges, including biases against specific subsets of users. Several years ago, a team at Penn State found that posts on social media about people with disabilities could be flagged as more negative or toxic by commonly used public sentiment and toxicity detection models. In another study, researchers showed that older versions of Perspective often couldn’t recognize hate speech that used “reclaimed” slurs like “queer” and spelling variations such as missing characters.

The problem extends beyond toxicity-detectors-as-a-service. This week, a New York Times report revealed that eight years after a controversy over Black people being mislabeled as gorillas by image analysis software, tech giants still fear repeating the mistake. Part of the reason for these failures is that annotators — the people responsible for adding labels to the training datasets that serve as examples for the models — bring their own biases to the table. For example, there are frequently differences in the annotations between labelers who self-identify as African American or as members of the LGBTQ+ community and annotators who don’t identify as either of those two groups. To combat some of these issues, Microsoft allows the filters in Azure AI Content Safety to be fine-tuned for context.
“We have a team of linguistic and fairness experts that worked to define the guidelines taking into account cultural, language and context,” a Microsoft spokesperson added. “We then trained the AI models to reflect these guidelines … AI will always make some mistakes, [however,] so for applications that require errors to be nearly non-existent we recommend using a human-in-the-loop to verify results.” One early adopter of Azure AI Content Safety is Koo, a Bangalore, India-based blogging platform with a user base that speaks over 20 languages. Microsoft says it’s partnering with Koo to tackle moderation challenges like analyzing memes and learning the colloquial nuances of languages other than English. We weren’t offered the chance to test Azure AI Content Safety ahead of its release, and Microsoft didn’t answer questions about its annotation or bias mitigation approaches. But rest assured we’ll be watching closely to see how Azure AI Content Safety performs in the wild.

In a blog post published last week to mark Global Accessibility Awareness Day, Microsoft’s Chief Accessibility Officer Jenny Lay-Flurrie talked enthusiastically about the promise generative AI holds for delivering a new paradigm in technical support for blind users. Since 2018, the tech giant has partnered with leading virtual sighted assistance provider BeMyEyes to process technical support tickets for visually impaired customers contacting its Disability Answer Desk. Users with sight restrictions can connect with Microsoft’s technical support team via video call to receive visual interpretation services related to troubleshooting technical issues, such as a laptop requiring a reboot or the installation of software updates.
Now, using a service powered by OpenAI’s advanced generative AI model GPT-4, users will be able to accomplish the same with AI rather than direct one-to-one human communication. The new AI interaction operates through the user feeding images through the BeMyEyes app, which ChatGPT can then analyze to identify objects and images as well as decipher text. What sets ChatGPT apart from other image description services is its ability to sustain a back-and-forth, human-like conversation with the user and answer questions directly, as well as provide contextual feedback and advice. This may include whether a user’s hardware meets the specifications for certain software and how to optimize the set-up based on individual user preferences. The new service is known as Virtual Volunteer, and accompanying Microsoft on the corporate beta test program are the likes of Hilton, P&G, Sony and The National Federation of the Blind.

Commenting on the initiative in her blog post, Lay-Flurrie said: “Over the past few months, the world has been captivated by the promise of generative AI. Accessibility is important to deliver inclusive products and is a key part of the Responsible AI principles. Responsible AI is accessible AI. To keep accessibility at the heart of generative AI, we have three grounding principles: AI should be accessible, representative of people with disabilities and innovate to open doors. Great to see examples of where generative AI can change paradigms for disabled people. Starting with BeMyEyes.”

For visual interpretation services in particular, there is not just a shift in technological paradigm at play but a psychological one too. On the face of it, sighted individuals may consider AI assistance to be a downgrade from that of humans. After all, generative AI tools such as OpenAI’s ChatGPT may be evolving at an explosive rate but are certainly not on a par with the interpersonal skills of human beings or our abilities to detect subtleties and nuance.
Nevertheless, while sighted human assistance, be it direct or virtual, from friends and family or service providers, certainly has its place, it smacks of a dependency on others that carries certain connotations. An automated solution, by contrast, is simply another piece of assistive tech in the toolbox for getting the job done independently — and there is a great deal to be said for that!