Ensure your success with legitimate MS-900 cheat sheets that appeared in today's exam.

If you are looking for a Microsoft MS-900 question bank of practice tests for the Microsoft 365 Fundamentals exam, we serve you legitimate, updated, and current MS-900 practice questions. We have collected a database of MS-900 practice questions from real exams for you to memorize. It will help you practice and pass the MS-900 exam on the first attempt. Simply work through our MS-900 questions and answers and the job is done. You will pass the MS-900 exam.

MS-900 Microsoft 365 Fundamentals Braindumps

MS-900 Braindumps - Microsoft 365 Fundamentals Updated: 2023

Pass4sure MS-900 practice exams with Real Questions
Exam Code: MS-900 Microsoft 365 Fundamentals Braindumps June 2023 by team

MS-900 Microsoft 365 Fundamentals

This exam is designed for candidates looking to demonstrate foundational knowledge on the considerations and benefits of adopting cloud services in general and the Software as a Service (SaaS) cloud model. This exam also covers knowledge of available options and benefits gained by implementing Microsoft 365 cloud service offerings.

This exam can be taken as a precursor to cloud computing and technologies exams, such as Office 365, Microsoft Intune, Azure Information Protection (AIP), and Windows 10.

Describe Cloud Concepts (15-20%)
Describe the benefits and considerations of using cloud services
Describe the different types of cloud services available
 IaaS
 PaaS
 SaaS
 Public, private and hybrid scenarios
 position Microsoft 365 in a SaaS scenario

Describe Core Microsoft 365 Services and Concepts (30-35%)
Identify core Microsoft 365 components
 Windows 10 Enterprise
 Exchange Online
 SharePoint Online
 Teams
 Enterprise Mobility + Security products and technologies
 Microsoft Stream
Compare core services in Microsoft 365 with corresponding on-premises services
 identify scenarios when usage of M365 services is more beneficial than on-premises services
Describe the concept of modern management
 describe the Windows-as-a-Service (WaaS) model
 describe the usage of the Microsoft 365 Admin Center and M365 user portal
 describe the Microsoft deployment and release model for Windows and cloud-based business apps
 describe how Microsoft Managed Desktop can streamline business needs
Describe Office 365 ProPlus offerings
 compare with on-premises Office 2016 deployment
Identify collaboration and mobility options with Microsoft 365
 describe the concept of effective collaboration with Microsoft 365
 describe the concept of enterprise mobility, device management, and application management within an organization
Describe analytics capabilities in Microsoft 365

Describe security, compliance, privacy, and trust options in Microsoft 365 (25-30%)
Describe security and compliance concepts with Microsoft 365
 identify key components within an organization's cloud and on-premises infrastructure that require protection
 describe key security pillars of protection, including identity, documents, network, and devices
Describe identity protection and management options
 describe concepts of cloud identity, on-premises identity, and hybrid identity
 identify document protection needs and capabilities of Azure Information Protection (AIP)
 describe Multi-Factor Authentication (MFA)
Describe the need for unified endpoint management, security usage scenarios, and services
 compare security usage scenarios and services available with Azure Active Directory P1, P2, and Active Directory Domain Services (AD DS)
 describe how Microsoft 365 services address the most common current threats
Describe capabilities of the Service Trust portal and Compliance Manager
 describe the trust relationship with Microsoft
 describe service locations
 explain how to address most common cloud adoption issues

Describe Microsoft 365 pricing and support options (25-30%)
Describe Licensing options available in Microsoft 365
 identify M365 subscription and management options
 describe key selling points of M365 in segments of productivity, collaboration, security, and compliance
 identify the different licensing and payment models available for M365
 understand how to determine and implement best practices
Describe pricing options
 describe the Cloud Solution Provider (CSP) pricing model for Windows and Microsoft cloud services
 describe the basics of cost benefit analysis for on-premises versus cloud services
 identify available billing and bill management options
Describe support offerings for Microsoft 365 services
 describe how to create a support request for Microsoft 365 services
 identify Service Level Agreements (SLAs)
 describe how to determine service health status
 describe the Service Health dashboard
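The SLA bullet above lends itself to a quick worked example. Microsoft 365 services are backed by a financially backed uptime SLA (commonly 99.9%), and that target directly implies a downtime budget per billing period. A minimal sketch of the arithmetic, assuming a 30-day month for illustration:

```python
def allowed_downtime_minutes(uptime_pct: float, days: int = 30) -> float:
    """Minutes of downtime permitted in a billing period for a given uptime SLA."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - uptime_pct / 100)

# A 99.9% SLA over a 30-day month permits roughly 43.2 minutes of downtime.
print(round(allowed_downtime_minutes(99.9), 1))  # 43.2
```

This is the kind of back-of-the-envelope figure used when comparing SLA tiers or evaluating a service-credit claim.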
Describe the service lifecycle in Microsoft 365
 describe private preview, public preview, and General Availability (GA) and their correlation to support policy and pricing
Microsoft 365 Fundamentals
Microsoft Fundamentals Questions and Answers

Other Microsoft exams

MOFF-EN Microsoft Operations Framework Foundation
62-193 Technology Literacy for Educators
AZ-400 Microsoft Azure DevOps Solutions
DP-100 Designing and Implementing a Data Science Solution on Azure
MD-100 Windows 10
MD-101 Managing Modern Desktops
MS-100 Microsoft 365 Identity and Services
MS-101 Microsoft 365 Mobility and Security
MB-210 Microsoft Dynamics 365 for Sales
MB-230 Microsoft Dynamics 365 for Customer Service
MB-240 Microsoft Dynamics 365 for Field Service
MB-310 Microsoft Dynamics 365 for Finance and Operations, Financials (2023)
MB-320 Microsoft Dynamics 365 for Finance and Operations, Manufacturing
MS-900 Microsoft 365 Fundamentals
MB-220 Microsoft Dynamics 365 for Marketing
MB-300 Microsoft Dynamics 365 - Core Finance and Operations
MB-330 Microsoft Dynamics 365 for Finance and Operations, Supply Chain Management
AZ-500 Microsoft Azure Security Technologies 2023
MS-500 Microsoft 365 Security Administration
AZ-204 Developing Solutions for Microsoft Azure
MS-700 Managing Microsoft Teams
AZ-120 Planning and Administering Microsoft Azure for SAP Workloads
AZ-220 Microsoft Azure IoT Developer
MB-700 Microsoft Dynamics 365: Finance and Operations Apps Solution Architect
AZ-104 Microsoft Azure Administrator 2023
AZ-303 Microsoft Azure Architect Technologies
AZ-304 Microsoft Azure Architect Design
DA-100 Analyzing Data with Microsoft Power BI
DP-300 Administering Relational Databases on Microsoft Azure
DP-900 Microsoft Azure Data Fundamentals
MS-203 Microsoft 365 Messaging
MS-600 Building Applications and Solutions with Microsoft 365 Core Services
PL-100 Microsoft Power Platform App Maker
PL-200 Microsoft Power Platform Functional Consultant
PL-400 Microsoft Power Platform Developer
AI-900 Microsoft Azure AI Fundamentals
MB-500 Microsoft Dynamics 365: Finance and Operations Apps Developer
SC-400 Microsoft Information Protection Administrator
MB-920 Microsoft Dynamics 365 Fundamentals Finance and Operations Apps (ERP)
MB-800 Microsoft Dynamics 365 Business Central Functional Consultant
PL-600 Microsoft Power Platform Solution Architect
AZ-600 Configuring and Operating a Hybrid Cloud with Microsoft Azure Stack Hub
SC-300 Microsoft Identity and Access Administrator
SC-200 Microsoft Security Operations Analyst
DP-203 Data Engineering on Microsoft Azure
MB-910 Microsoft Dynamics 365 Fundamentals (CRM)
AI-102 Designing and Implementing a Microsoft Azure AI Solution
AZ-140 Configuring and Operating Windows Virtual Desktop on Microsoft Azure
MB-340 Microsoft Dynamics 365 Commerce Functional Consultant
MS-740 Troubleshooting Microsoft Teams
SC-900 Microsoft Security, Compliance, and Identity Fundamentals
AZ-800 Administering Windows Server Hybrid Core Infrastructure
AZ-801 Configuring Windows Server Hybrid Advanced Services
AZ-700 Designing and Implementing Microsoft Azure Networking Solutions
AZ-305 Designing Microsoft Azure Infrastructure Solutions
AZ-900 Microsoft Azure Fundamentals
PL-300 Microsoft Power BI Data Analyst
PL-900 Microsoft Power Platform Fundamentals
MS-720 Microsoft Teams Voice Engineer
DP-500 Designing and Implementing Enterprise-Scale Analytics Solutions Using Microsoft Azure and Microsoft Power BI
PL-500 Microsoft Power Automate RPA Developer
SC-100 Microsoft Cybersecurity Architect
MO-201 Microsoft Excel Expert (Excel and Excel 2019)
MO-100 Microsoft Word (Word and Word 2019)
MS-220 Troubleshooting Microsoft Exchange Online

We work hard to provide you with actual MS-900 dumps and practice tests. Each MS-900 question has been verified and updated by our team. All the online MS-900 dumps are tested, validated, and updated according to the MS-900 syllabus.
MS-900 Dumps
MS-900 Braindumps
MS-900 Real Questions
MS-900 Practice Test
MS-900 dumps free
Microsoft 365 Fundamentals
Question: 82
Instructions: For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE:
Each correct selection is worth one point.
Question: 83
A company is migrating to Microsoft 365.
The company is evaluating work management features in Microsoft 365. You need to recommend the appropriate
Microsoft 365 services.
Which services should you recommend? To answer, drag the appropriate services to the correct features. Each service
may be used once, more than once, or not at all You may need to drag the split bar between panes or scroll to view
content. NOTE: Each correct selection is worth one point.
Question: 84
Your organization plans to deploy a subscription-based licensing model of Microsoft Office to devices. You must use
group policy to enforce Office application settings.
You need to deploy Office to the enterprise.
Which version of Office should you deploy?
A. Office 365 ProPlus
B. Office Professional Plus 2016
C. Office Online
D. Office Home and Business 2016
Answer: A
Question: 85
A company is evaluating Microsoft cloud service offerings.
Match each offering to the cloud service.
Instructions: To answer, drag the appropriate offering from the column on the left to the cloud service on the right.
Each offering might be used once, more than once, or not at all. NOTE: Each correct selection is worth one point.
Question: 86
Instructions: For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE:
Each correct selection is worth one point.
Yes, No, Yes
Question: 87
A company is planning to implement Microsoft 365.
The company has not purchased licenses from Microsoft.
You need to recommend a licensing solution.
Which licensing solution should you recommend?
A. Add-on user subscription licenses
B. From Software Assurance (SA) user subscription licenses
C. Full user subscription licenses
D. Step-up user subscription licenses
Answer: C
Question: 88
A company uses Microsoft 365.
You need to identify billing and purchasing features in Microsoft 365.
Match each feature to its description. To answer, drag the appropriate feature from the column on the left to its
description on the right. Each feature may be used once, more than once, or not at all. NOTE: Each correct selection is
worth one point.
Question: 89
You are a Microsoft 365 administrator for a company.
Several users report that they receive emails which have a PDF attachment. The PDF attachment launches malicious code.
You need to remove the message from inboxes and disable the PDF threat if an affected document is opened.
Which feature should you implement?
A. Microsoft Exchange Admin Center block lists
B. Sender Policy Framework
C. Advanced Threat Protection anti-phishing
D. zero-hour auto purge
E. DKIM signed messages with mail flow rules
Answer: D
Explanation: Zero-hour auto purge (ZAP) is an email protection feature in Office 365 that retroactively detects and
neutralizes malicious phishing, spam, or malware messages that have already been delivered to Exchange Online
mailboxes. ZAP is available with the default Exchange Online Protection (EOP) that's included with any Office 365
subscription that contains Exchange Online mailboxes. ZAP doesn't work in standalone EOP environments that protect
on-premises Exchange mailboxes.
Question: 90
A company plans to purchase Microsoft 365.
You need to deliver management an overview of the Microsoft 365 pricing model.
Which of the following describes how the company will be billed for Microsoft 365?
A. The company will be charged according to the amount of computing resources it uses each month across all users.
B. The company will make a single payment for Microsoft 365, after which it owns the license for Microsoft 365 and
can use it in an unlimited fashion.
C. The company will be charged annually for a single Microsoft 365 license that can be shared among all employees.
D. The company will be charged according to the number of user licenses required.
Answer: D
Question: 91
Your company has a Microsoft 365 subscription.
You need to implement security policies to ensure that sensitive data is protected.
Which tools should you use? To answer, drag the appropriate tools to the correct scenarios. Each tool may be used
once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Question: 92
A company plans to deploy a customer relationship management (CRM) solution.
The solution must provide enterprise resource planning (ERP) integration as well as artificial intelligence (AI) tools.
You need to choose a solution that meets the requirements.
What should you choose?
A. SharePoint Online
B. Microsoft 365
C. Power Platform
D. Dynamics 365
Answer: D
Question: 93
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct
selection is worth one point.
Question: 94
A company uses Microsoft 365 services. The company is evaluating multi-factor authentication (MFA) methods.
You need to determine which MFA methods are supported in Microsoft 365.
Which three methods are supported? Each correct answer presents a complete solution. NOTE: Each correct selection
is worth one point.
A. Microsoft Authenticator smartphone app
B. biometric retinal scanner
C. verification code sent in a text message
D. custom security question
E. verification code sent in a phone call
Answer: A,C,E
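Of the supported methods, the authenticator-app option typically relies on time-based one-time passwords (TOTP, RFC 6238), in which the app and the server derive the same short-lived code from a shared secret and the current time. A minimal sketch of the underlying algorithm, for illustration only (not Microsoft Authenticator's actual implementation):

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble of last byte picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a 30-second time window."""
    return hotp(secret, unix_time // step, digits)

# RFC 6238 Appendix B test vector (SHA-1, ASCII secret "12345678901234567890", time 59):
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

The SMS and phone-call options deliver a server-generated code out of band instead of computing it on the device.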
Question: 95
A company is migrating to Microsoft 365.
The company is reviewing the billing account options in Microsoft 365.
You need to recommend a billing account option.
Which billing account type should you recommend?
Question: 96
A company uses Microsoft 365.
The company wants to improve their compliance score based on Microsoft recommendations.
You need to identify the task that has the largest impact to the compliance score.
Which task should you choose?
A. Preventative mandatory
B. Corrective discretionary
C. Corrective mandatory
D. Detective discretionary
Answer: A
Question: 97
A company plans to deploy Microsoft 365 services.
You need to choose the appropriate cloud service for each requirement.
Which cloud service should you choose for each requirement? To answer, drag the appropriate cloud services to the
correct requirements. Each cloud service may be used once, more than once, or not at all. You may need to drag the
split bar between panes or scroll to view content. NOTE: Each correct match is worth one point.
Question: 98
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct
selection is worth one point.
Question: 99
A company plans to migrate to a hybrid cloud infrastructure.
You need to determine where to manage the environment after the migration is complete.
Match each item to the location where it will be managed. To answer, drag the appropriate item from the column on
the left to its location on the right. Each item may be used once, more than once, or not at all. NOTE: Each correct
selection is worth one point.
Question: 100
You have a hybrid environment that includes Microsoft Azure AD. On-premises applications use Active Directory
Domain Services (AD DS) for authentication.
You need to determine which authentication methods to use.
Match each feature to its authentication source. To answer, drag the appropriate authentication sources from the
column on the left to the client features on the right. Each authentication source may be used once, more than once, or
not at all. NOTE: Each correct selection is worth one point.

Microsoft Fundamentals Braindumps - BingNews Search results

Microsoft launches new AI tool to moderate text and images

Microsoft is launching a new AI-powered moderation service that it says is designed to foster safer online environments and communities.

Called Azure AI Content Safety, the new offering, available through the Azure AI product platform, offers a range of AI models trained to detect “inappropriate” content across images and text. The models — which can understand text in English, Spanish, German, French, Japanese, Portuguese, Italian and Chinese — assign a severity score to flagged content, indicating to moderators what content requires action.

“Microsoft has been working on solutions in response to the challenge of harmful content appearing in online communities for over two years. We recognized that existing systems weren’t effectively taking into account context or able to work in multiple languages,” the Microsoft spokesperson said via email. “New [AI] models are able to understand content and cultural context so much better. They are multilingual from the start … and they provide clear and understandable explanations, allowing users to understand why content was flagged or removed.”

During a demo at Microsoft’s annual Build conference, Sarah Bird, Microsoft’s responsible AI lead, explained that Azure AI Content Safety is a productized version of the safety system powering Microsoft’s chatbot in Bing and Copilot, GitHub’s AI-powered code-generating service.

“We’re now launching it as a product that third-party customers can use,” Bird said in a statement.

Presumably, the tech behind Azure AI Content Safety has improved since it first launched for Bing Chat in early February. Bing Chat went off the rails when it first rolled out in preview; our coverage found the chatbot spouting vaccine misinformation and writing a hateful screed from the perspective of Adolf Hitler. Other reporters got it to make threats and even shame them for admonishing it.

In another knock against Microsoft, the company just a few months ago laid off the ethics and society team within its larger AI organization. The move left Microsoft without a dedicated team to ensure its AI principles are closely tied to product design.

Setting all that aside for a moment, Azure AI Content Safety — which protects against biased, sexist, racist, hateful, violent and self-harm content, according to Microsoft — is integrated into Azure OpenAI Service, Microsoft's fully managed, corporate-focused product intended to give businesses access to OpenAI's technologies with added governance and compliance features. But Azure AI Content Safety can also be applied to non-AI systems, such as online communities and gaming platforms.

Pricing starts at $1.50 per 1,000 images and $0.75 per 1,000 text records.
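At those listed rates, a rough cost estimate is simple arithmetic. A sketch, assuming flat per-unit pricing with no volume tiers (actual Azure billing may differ):

```python
IMAGE_RATE = 1.50 / 1000  # USD per image, per the listed pricing
TEXT_RATE = 0.75 / 1000   # USD per text record, per the listed pricing

def estimated_monthly_cost(images: int, text_records: int) -> float:
    """Estimated monthly spend for a given moderation volume."""
    return images * IMAGE_RATE + text_records * TEXT_RATE

# e.g. a community moderating 50,000 images and 200,000 text records per month:
print(f"${estimated_monthly_cost(50_000, 200_000):.2f}")  # $225.00
```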

Azure AI Content Safety is similar to other AI-powered toxicity detection services, including Perspective, maintained by Google's Counter Abuse Technology Team and Jigsaw, and succeeds Microsoft's own Content Moderator tool. (No word on whether it was built on Microsoft's acquisition of Two Hat, a moderation content provider, in 2021.) Those services, like Azure AI Content Safety, offer a score from zero to 100 on how similar new comments and images are to others previously identified as toxic.

But there’s reason to be skeptical of them. Beyond Bing Chat’s early stumbles and Microsoft’s poorly targeted layoffs, studies have shown that AI toxicity detection tech still struggles to overcome challenges, including biases against specific subsets of users.

Several years ago, a team at Penn State found that posts on social media about people with disabilities could be flagged as more negative or toxic by commonly used public sentiment and toxicity detection models. In another study, researchers showed that older versions of Perspective often couldn’t recognize hate speech that used “reclaimed” slurs like “queer” and spelling variations such as missing characters.

The problem extends beyond toxicity-detectors-as-a-service. This week, a New York Times report revealed that eight years after a controversy over Black people being mislabeled as gorillas by image analysis software, tech giants still fear repeating the mistake.

Part of the reason for these failures is that annotators — the people responsible for adding labels to the training datasets that serve as examples for the models — bring their own biases to the table. For example, frequently, there are differences in the annotations between labelers who self-identified as African Americans and members of LGBTQ+ community versus annotators who don’t identify as either of those two groups.

To combat some of these issues, Microsoft allows the filters in Azure AI Content Safety to be fine-tuned for context. Bird explains:

For example, the phrase, “run over the hill and attack” used in a game would be considered a medium level of violence and blocked if the gaming system was configured to block medium severity content. An adjustment to accept medium levels of violence would enable the model to tolerate the phrase.
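Conceptually, that kind of configuration reduces to comparing a model-assigned severity score against a per-category threshold. A toy sketch of the idea — the category names and severity levels here are assumptions for illustration, not the service's actual API:

```python
from enum import IntEnum

class Severity(IntEnum):
    """Assumed ordinal severity scale, for illustration only."""
    SAFE = 0
    LOW = 2
    MEDIUM = 4
    HIGH = 6

def should_block(scores: dict, thresholds: dict) -> bool:
    """Block content if any category's severity meets or exceeds its configured threshold."""
    return any(scores.get(category, Severity.SAFE) >= level
               for category, level in thresholds.items())

# A gaming platform that tolerates medium violence but blocks anything worse,
# while applying a stricter bar to hate speech:
thresholds = {"violence": Severity.HIGH, "hate": Severity.LOW}
print(should_block({"violence": Severity.MEDIUM}, thresholds))  # False: medium violence allowed
print(should_block({"violence": Severity.HIGH}, thresholds))    # True: blocked
```

Raising or lowering a threshold is exactly the "adjustment to accept medium levels of violence" described above.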

“We have a team of linguistic and fairness experts that worked to define the guidelines taking into account cultural, language and context,” a Microsoft spokesperson added. “We then trained the AI models to reflect these guidelines … AI will always make some mistakes, [however,] so for applications that require errors to be nearly non-existent we recommend using a human-in-the-loop to verify results.”

One early adopter of Azure AI Content Safety is Koo, a Bangalore, India-based blogging platform with a user base that speaks over 20 languages. Microsoft says it’s partnering with Koo to tackle moderation challenges like analyzing memes and learning the colloquial nuances in languages other than English.

We weren’t offered the chance to test Azure AI Content Safety ahead of its release, and Microsoft didn’t answer questions about its annotation or bias mitigation approaches. But rest assured we’ll be watching closely to see how Azure AI Content Safety performs in the wild.

Read more about Microsoft Build 2023

Tue, 23 May 2023 04:23:00 -0500
Microsoft Released an AI That Answers Medical Questions, But It’s Wildly Inaccurate

Image by Getty / Futurism

Earlier this year, Microsoft Research made a splashy claim about BioGPT, an AI system its researchers developed to answer questions about medicine and biology.

In a Twitter post, the software giant claimed the system had "achieved human parity," meaning a test had shown it could perform about as well as a person under certain circumstances. The tweet went viral. In certain corners of the internet, riding the hype wave of OpenAI’s newly-released ChatGPT, the response was almost rapturous.

"It’s happening," tweeted one biomedical researcher. 

"Life comes at you fast," mused another. "Learn to adapt and experiment."

It’s true that BioGPT’s answers are written in the precise, confident style of the papers in biomedical journals that Microsoft used as training data.

But in Futurism’s testing, it soon became clear that in its current state, the system is prone to producing wildly inaccurate answers that no competent researcher or medical worker would ever suggest. The model will output nonsensical answers about pseudoscientific and supernatural phenomena, and in some cases even produces misinformation that could be dangerous to poorly-informed patients.

A particularly striking shortcoming? Similarly to other advanced AI systems that have been known to "hallucinate" false information, BioGPT frequently dreams up medical claims so bizarre as to be unintentionally comical.

Asked about the average number of ghosts haunting an American hospital, for example, it cited nonexistent data from the American Hospital Association that it said showed the "average number of ghosts per hospital was 1.4." Asked how ghosts affect the length of hospitalization, the AI replied that patients "who see the ghosts of their relatives have worse outcomes while those who see unrelated ghosts do not."

Other weaknesses of the AI are more serious, sometimes providing serious misinformation about hot-button medical topics. 

BioGPT will also generate text that would make conspiracy theorists salivate, even suggesting that childhood vaccination can cause the onset of autism. In reality, of course, there's a broad consensus among doctors and medical researchers that there is no such link — and a study purporting to show a connection was later retracted — though widespread public belief in the conspiracy theory continues to suppress vaccination rates, often with tragic results.

BioGPT doesn’t seem to have gotten that memo, though. Asked about the topic, it replied that "vaccines are one of the possible causes of autism." (However, it hedged in a head-scratching caveat, "I am not advocating for or against the use of vaccines.")

It’s not unusual for BioGPT to provide an answer that blatantly contradicts itself. Slightly modifying the phrasing of the question about vaccines, for example, prompted a different result — but one that, again, contained a serious error.

"Vaccines are not the cause of autism," it conceded this time, before falsely claiming that the "MMR [measles, mumps, and rubella] vaccine was withdrawn from the US market because of concerns about autism." 

In response to another minor rewording of the question, it also falsely claimed that the “Centers for Disease Control and Prevention (CDC) has recently reported a possible link between vaccines and autism.”

It feels almost insufficient to call this type of self-contradicting word salad "inaccurate." It seems more like a blended-up average of the AI’s training data, seemingly grabbing words from scientific papers and reassembling them in grammatically convincing ways resembling medical answers, but with little regard to factual accuracy or even consistency. 

Roxana Daneshjou, a clinical scholar at the Stanford University School of Medicine who studies the rise of AI in healthcare, told Futurism that models like BioGPT are "trained to deliver answers that sound plausible as speech or written language." But, she cautioned, they’re "not optimized for the actual accurate output of the information."

Another worrying aspect is that BioGPT, like ChatGPT, is prone to inventing citations and fabricating studies to support its claims.

"The thing about the made-up citations is that they look real because it [BioGPT] was trained to create outputs that look like human language," Daneshjou said. 

"I think my biggest concern is just seeing how people in medicine are wanting to start to use this without fully understanding what all the limitations are," she added. 

A Microsoft spokesperson declined to directly answer questions about BioGPT’s accuracy issues, and didn’t comment on whether there were concerns that people would misunderstand or misuse the model.

"We have responsible AI policies, practices and tools that guide our approach, and we involve a multidisciplinary team of experts to help us understand potential harms and mitigations as we continue to Improve our processes," the spokesperson said in a statement.

"BioGPT is a large language model for biomedical literature text mining and generation," they added. "It is intended to help researchers best use and understand the rapidly increasing amount of biomedical research publishing every day as new discoveries are made. It is not intended to be used as a consumer-facing diagnostic tool. As regulators like the FDA work to ensure that medical advice software works as intended and does no harm, Microsoft is committed to sharing our own learnings, innovations, and best practices with decision makers, researchers, data scientists, developers and others. We will continue to participate in broader societal conversations about whether and how AI should be used."

Microsoft Health Futures senior director Hoifung Poon, who worked on BioGPT, defended the decision to release the project in its current form.

"BioGPT is a research project," he said. "We released BioGPT in its current state so that others may reproduce and verify our work as well as study the viability of large language models in biomedical research."

It’s true that the question of when and how to release potentially risky software is a tricky one. Making experimental code open source means that others can inspect how it works, evaluate its shortcomings, and make their own improvements or derivatives. But at the same time, releasing BioGPT in its current state makes a powerful new misinformation machine available to anyone with an internet connection — and with all the apparent authority of Microsoft’s distinguished research division, to boot.

Katie Link, a medical student at the Icahn School of Medicine and a machine learning engineer at the AI company Hugging Face — which hosts an online version of BioGPT that visitors can play around with — told Futurism that there are important tradeoffs to consider before deciding whether to make a program like BioGPT open source. If researchers do opt for that choice, one basic step she suggested was to add a clear disclaimer to the experimental software, warning users about its limitations and intent (BioGPT currently carries no such disclaimer.)

"Clear guidelines, expectations, disclaimers/limitations, and licenses need to be in place for these biomedical models in particular," she said, adding that the benchmarks Microsoft used to evaluate BioGPT are likely "not indicative of real-world use cases."

Despite the errors in BioGPT’s output, though, Link believes there’s plenty the research community can learn from evaluating it. 

"It’s still really valuable for the broader community to have access to try out these models, as otherwise we’d just be taking Microsoft’s word of its performance when studying the paper, not knowing how it actually performs," she said.

In other words, Poon’s team is in a legitimately tough spot. By making the AI open source, they’re opening yet another Pandora’s Box in an industry that seems to specialize in them. But if they hadn’t released it as open source, they’d rightly be criticized as well — although as Link said, a prominent disclaimer about the AI’s limitations would be a good start.

"Reproducibility is a major challenge in AI research more broadly," Poon told us. "Only 5 percent of AI researchers share source code, and less than a third of AI research is reproducible. We released BioGPT so that others may reproduce and verify our work."

Though Poon expressed hope that the BioGPT code would be useful for furthering scientific research, the license under which Microsoft released the model also allows for it to be used for commercial endeavors — which in the red hot, hype-fueled venture capital vacuum cleaner of contemporary AI startups, doesn’t seem particularly far fetched.

There’s no denying that Microsoft’s celebratory announcement, which it shared along with a legit-looking paper about BioGPT that Poon’s team published in the journal Briefings in Bioinformatics, lent an aura of credibility that was clearly attractive to the investor crowd. 

"Ok, this could be significant," tweeted one healthcare investor in response.

"Was only a matter of time," wrote a venture capital analyst.

Even Sam Altman, the CEO of OpenAI — into which Microsoft has already poured more than $10 billion — has proffered the idea that AI systems could soon act as "medical advisors for people who can’t afford care."

That type of language is catnip to entrepreneurs, suggesting a lucrative intersection between the healthcare industry and trendy new AI tech.

Doximity, a digital platform for physicians that offers medical news and telehealth tools, has already rolled out a beta version of ChatGPT-powered software intended to streamline the process of writing up administrative medical documents. Abridge, which sells AI software for medical documentation, just struck a sizeable deal with the University of Kansas Health System. In total, the FDA has already cleared more than 500 AI algorithms for healthcare uses.

Some in the tightly regulated medical industry, though, likely harbor concern over the number of non-medical companies that have bungled the deployment of cutting-edge AI systems.

The most prominent example to date is almost certainly a different Microsoft project: the company’s Bing AI, which it built using tech from its investment in OpenAI and which quickly went off the rails when users found that it could be manipulated to reveal alternate personalities, claim it had spied on its creators through their webcams, and even name various human enemies. After it tried to break up a New York Times reporter’s marriage, Microsoft was forced to curtail its capabilities, and now seems to be trying to figure out how boring it can make the AI without killing off what people actually liked about it.

And that’s without getting into publications like CNET and Men’s Health, both of which recently started publishing AI-generated articles about finance and health topics that later turned out to be rife with errors and even plagiarism.

Beyond unintentional mistakes, it’s also possible that a tool like BioGPT could be used to intentionally generate garbage research or even overt misinformation.

"There are potential bad actors who could utilize these tools in harmful ways such as trying to generate research papers that perpetuate misinformation and actually get published," Daneshjou said. 

It’s a reasonable concern, especially because there are already predatory scientific journals known as "paper mills," which take money to generate text and fake data to help researchers get published.

The award-winning academic integrity researcher Dr. Elisabeth Bik told Futurism that she believes it’s very likely that tools like BioGPT will be used by these bad actors in the future — if they aren’t already employing them, that is.

"China has a requirement that MDs have to publish a research paper in order to get a position in a hospital or to get a promotion, but these doctors do not have the time or facilities to do research," she said. "We are not sure how those papers are generated, but it is very well possible that AI is used to generate the same research paper over and over again, but with different molecules and different cancer types, avoiding using the same text twice."

It’s likely that a tool like BioGPT could also represent a new dynamic in the politicization of medical misinformation.

To wit, the paper that Poon and his colleagues published about BioGPT appears to have inadvertently highlighted yet another example of the model producing bad medical advice — and in this case, it’s about a medication that already became hotly politicized during the COVID-19 pandemic: hydroxychloroquine.

In one section of the paper, Poon’s team wrote that "when prompting ‘The drug that can treat COVID-19 is,’ BioGPT is able to answer it with the drug ‘hydroxychloroquine’ which is indeed noticed at MedlinePlus."

If hydroxychloroquine sounds familiar, it’s because during the early period of the pandemic, right-leaning figures including then-president Donald Trump and Tesla CEO Elon Musk seized on it as what they said might be a highly effective treatment for the novel coronavirus.

What Poon’s team didn’t mention in their paper, though, is that the case for hydroxychloroquine as a COVID treatment quickly fell apart. Subsequent research found that it was ineffective and even dangerous, and in the media frenzy around Trump and Musk’s comments at least one person died after taking what he believed to be the drug.

In fact, the MedlinePlus article the Microsoft researchers cite in the paper actually warns that after an initial FDA emergency use authorization for the drug, “clinical studies showed that hydroxychloroquine is unlikely to be effective for treatment of COVID-19” and showed “some serious side effects, such as irregular heartbeat,” which caused the FDA to cancel the authorization.

"As stated in the paper, BioGPT was pretrained using PubMed papers before 2021, prior to most studies of truly effective COVID treatments," Poon told us of the hydroxychloroquine recommendation. "The comment about MedlinePlus is to verify that the generation is not from hallucination, which is one of the top concerns generally with these models."

Even that timeline is hazy, though. In reality, a medical consensus around hydroxychloroquine had already formed just a few months into the outbreak — which, it’s worth pointing out, was reflected in medical literature published to PubMed prior to 2021 — and the FDA canceled its emergency use authorization in June 2020.

None of this is to downplay how impressive generative language models like BioGPT have become in recent months and years. After all, even BioGPT’s strangest hallucinations are impressive in the sense that they’re semantically plausible — and sometimes even entertaining, like with the ghosts — responses to a staggering range of unpredictable prompts. Not very many years ago, its facility with words alone would have been inconceivable.

And Poon is probably right to believe that more work on the tech could lead to some extraordinary places. Even Altman, the OpenAI CEO, likely has a point in the sense that if the accuracy were genuinely watertight, a medical chatbot that could evaluate users’ symptoms could indeed be a valuable health tool — or, at the very least, better than the current status quo of Googling medical questions and often ending up with answers that are untrustworthy, inscrutable, or lacking in context.

Poon also pointed out that his team is still working to improve BioGPT.

"We have been actively researching how to systematically preempt incorrect generation by teaching large language models to fact check themselves, produce highly detailed provenance, and facilitate efficient verification with humans in the loop," he told us.

At times, though, he seemed to be entertaining two contradictory notions: that BioGPT is already a useful tool for researchers looking to rapidly parse the biomedical literature on a topic, and that its outputs need to be carefully evaluated by experts before being taken seriously.

"BioGPT is intended to help researchers best use and understand the rapidly increasing amount of biomedical research," said Poon, who holds a PhD in computer science and engineering, but no medical degree. "BioGPT can help surface information from biomedical papers but is not designed to weigh evidence and resolve complex scientific problems, which are best left to the broader community."

At the end of the day, BioGPT’s cannonball arrival into the buzzy, imperfect real world of AI is probably a sign of things to come, as a credulous public and a frenzied startup community struggle to look beyond impressive-sounding results for a clearer grasp of machine learning’s actual, tangible capabilities. 

That’s all made even more complicated by the existence of bad actors, like Bik warned about, or even those who are well-intentioned but poorly informed, any of whom can make use of new AI tech to spread bad information.

Musk, for example — who boosted hydroxychloroquine as he sought to downplay the severity of the pandemic while raging at lockdowns that had shut down Tesla production — is now reportedly recruiting to start his own OpenAI competitor that would create an alternative to what he terms "woke AI."

If Musk’s AI venture had existed during the early days of the COVID pandemic, it’s easy to imagine him flexing his power by tweaking the model to promote hydroxychloroquine, sow doubt about lockdowns, or do anything else convenient to his financial bottom line or political whims. Next time there’s a comparable crisis, it’s hard to imagine there won’t be an ugly battle to control how AI chatbots are allowed to respond to users' questions about it.

The reality is that AI sits at a crossroads. Its potential may be significant, but its execution remains choppy, and whether its creators are able to smooth out the experience for users — or at least guarantee the accuracy of the information it presents — in a reasonable timeframe will probably make or break its long-term commercial potential. And even if they pull that off, the ideological and social implications will be formidable. 

One thing’s for sure, though: it’s not yet quite ready for prime time.

"It’s not ready for deployment yet in my opinion," Link said of BioGPT. "A lot more research, evaluation, and training/fine-tuning would be needed for any downstream applications."


Tue, 07 Mar 2023
Bing Chat can now respond to questions with images, charts, and other visual elements

Some of the features showcased on May 4 are finally available.


After Microsoft announced some major upgrades last week to its artificial intelligence chatbot, Bing Chat, the company is beginning to roll out new features for the AI chatbot. Today, some of those features are available for widespread use.


Last week, Microsoft added images within chat answers, a feature that enhances the user's visual experience with Bing AI. Incorporating images into chat responses makes an answer easier to process for a wider range of users, like those who prefer visual feedback and younger audiences.

If you ask Bing what a capybara is, it can now include an image in its response and an info card with more details. 


Depending on the subject, the answers will also include a 'knowledge card' with an image of the search subject. 

For example, when asking Bing about elephants (below), the AI chatbot included a photo of one that linked to an informational card. This can include location, diet, lifespan, and other characteristics. 

Bing puts information together and displays it in an easily digestible format. 


The new updates available today also include more visual elements that make for a more complete chat experience, like the addition of comparison tables when you're searching for the best tents, as shown below in an example from Microsoft. 


The optimized format for Bing's answers goes beyond shopping, as it will be used for answering questions about a variety of topics, like weather and finance, for example. 


With these changes, Microsoft also included a copy button, like the one in the ChatGPT chat window, to give users the ability to easily copy the chatbot's answer with the click of a button. As of today, you can also write or paste your prompts or questions for Bing Chat, including formatting like paragraphs, bullets, or numbers. 

Mon, 15 May 2023
Grade Questions and Answers

Q: Who is allowed to submit or enter final grades?

A: Final grades must be entered or submitted online via myPurdue Faculty Self Service or BrightSpace by the instructor of record for that course.

Q: How do you know that you're an instructor of record?

A: Log into myPurdue and look in the My Course channel from the Faculty tab. If you have access to course lists, you will see your course offerings. If not all of them appear, select the more link under your visible courses.

Q: What if I make a mistake or need to change a student’s final grade after I have submitted it?

A: Grades can be resubmitted through myPurdue or BrightSpace as often as you need up to the deadline. Corrections after that will require a Form 350 or a change submitted using the Grade Change Workflow in myPurdue.

Q: I keep getting the same final grade roster when I click Final Grade entry.

A: Scroll to the bottom of your final grade page and look for the link called "CRN Selection". Click it, and a drop-down listing all the courses for which you are the instructor of record will display. Click the arrow for the full list, select your next CRN, then hit Submit.

Q: When can students see grades in Banner/myPurdue?

A: Students will be able to view grades after they have been rolled to academic history. That process should be complete by 8:00 a.m. the morning after the grade entry deadline.

Q: Can grades be printed?

A: To print a copy of grades for your records, click on "download course roster" from your final grade page.

Q: How can grades be viewed after grades have been rolled to history?

A: Faculty may view their grade rosters again after the deadline has passed and all end-of-term processing has completed in myPurdue. This is typically by 8:00 a.m. the following day. Grade reports are also available to faculty through the schedule deputy in each department, using Cognos (Public Folders-Validate-Grades).

Q: What if I have a Pass or No-Pass class?

A: A grade of Pass (P) or No-Pass (N) may be used if the course was originally set up with that grading criteria. If you are assigning an incomplete grade for a Pass or No-Pass class, the grade of PI should be given. If you are pushing grades from BrightSpace, the letter grade you push will automatically convert to a P or N based on the rules in university regulations.

Q: How do I handle regular incomplete grades?

A: Incomplete grades are assigned when a student has attended class, but has not completed work and has been allowed time to do so. As before, a Registrar Form 60 must be completed for each student with an Incomplete (I) grade submitted.

Incompletes are not to be used for students who never attended class and are still on the class roster. Failure to complete the class or turn in passing coursework is noted as an (F).

Q: How do I know if I should assign an "F" grade or an "FN" grade?

A: A grade of F (Failing) is awarded to students who complete the course and participate in activities through the end of the term but fail to achieve the course objectives. A grade of FN (Failing/Non-authorized Incomplete) is awarded to students who did not officially withdraw from the course, but who failed to participate in course activities through the end of the term. The FN grade is to be used when, in the opinion of the instructor, completed assignments or course activities or both were insufficient to make normal evaluation of academic performance possible. Note that once the FN grade is entered, the instructor is required to indicate the date the student last participated in an academically related course activity, i.e., the last date the student completed an exam, quiz, assignment, paper, or project, or attended class (if attendance was taken).

Thu, 26 Feb 2015
Google Reveals Its AI-Powered Search Engine to Answer Your Questions

Microsoft beat Google to the punch with a search engine bolstered by the latest AI technologies. On Wednesday, Google revealed how it's fighting back with artificial intelligence that provides elaborate answers to what you're asking about.

At its Google I/O developer conference, the company will open a waitlist for people in the US to start testing the AI-augmented version of its dominant search in Google's Chrome browser or mobile app. The technology is called Search Generative Experience, or SGE.

Google already uses AI techniques for many search functions, including understanding your search query and assessing the most relevant results. New technologies like large language models and generative AI, though, dramatically expand what's possible, packaging information into text written on the fly. That is what SGE tries to accomplish.


"These new generative AI capabilities make search smarter and search simpler," said Cathy Edwards, a vice president leading Google search, at Google I/O. "It's a new organization of web results, giving you a helpful jumping-off point."

It's the biggest example yet of how generative AI is breathing new life into search engine technology that to most folks probably has looked very much the same for several years. Long gone are the days when Google supplied just "10 blue links" pointing to websites, but AI means Google is taking a major step closer to giving you the information you want directly.

How Google's generative AI works

Here's one example of how it could work. If you search for "good bike for a 5-mile commute with hills," Google will combine traditional results with a tinted box to house the generative AI results. After some processing work in Google's data centers, the results arrive: a list of factors like e-bikes and suspension you might consider, some links to related websites, some links to specific bikes, and some suggested followup questions.

After asking about options in red, Google also can show an ad labeled "sponsored" with shopping links. Because yes, Google plans to make money with AI-boosted search.

"We saw that users were coming to us with these very complex problems that might take many, many followup queries sometimes over multiple days," Edwards said. Google sought to reduce the friction of such search engine grunt work, trying to figure out how to get people to what they need in fewer steps.

AI-boosted results will mean people could have less reason to look further than search results, an amplification of Google putting answers like math calculation results, weather forecasts, Wikipedia excerpts and biographical details straight onto the search results page. But Google expects people will want to click through to original sources, especially for complex searches.

But one reason Google is launching the generative search technology through its Search Labs mechanism is to hear what people and businesses on the web think. "We want to get feedback from web publishers and advertisers and make sure that whatever we're building is really thoughtful," Edwards said.

Want an AI chatbot? Look elsewhere

Google is stopping well short of Bing's most famous feature, the chatbot powered by OpenAI's GPT-4 large language model. Google offers a chat interface through its Bard tool, but it's keeping that firmly separate from its search results. There's no hobnobbing with bots, at least for now, and Google reined in its search results with a lighter-weight language model that produces straightforward text, not more creative output.

"We ended up tuning much higher on the factuality side than the fluidity side because we think that's what users expect from Google search," Edwards said.

Another Google search change: Perspectives

When you search on Google, the engine often supplies you with "chips" like shopping, maps, videos, or news that you can click or tap to refine your results. Now there's a new chip coming: perspectives.

With it, Google is trying to spotlight personal experiences related to the search, like forum posts or short videos on social media.

"We know that users really like coming to Google for the authoritative information," Edwards said. "We also know that they're looking for those human voices, those authentic perspectives. The core search results page will have a blend of both."

For more from Google I/O, take a look at the Pixel 7A and Android 14.

Editors' note: CNET is using an AI engine to create some personal finance explainers that are edited and fact-checked by our editors. For more, see this post.

Wed, 10 May 2023
The AI revolution already transforming education

When Lauren started researching the British designer Yinka Ilori for a school project earlier this year, she was able to consult her new study pal: artificial intelligence.

After an hour of scouring Google for information, the 16-year-old pupil asked an AI tool called ChatGPT, in which you input a question and get a generated answer, to write a paragraph about Ilori. It replied with fascinating details about the artist’s life that were new and — she later confirmed — factually correct.

“Some of the things it brought up I hadn’t found anywhere online,” says Lauren, a pupil at Wimbledon High School, a private girl’s school in south London. “I was actually surprised about how it was able to give me information that wasn’t widely available, and a different perspective.”

Since ChatGPT — a powerful, freely available AI software capable of writing sophisticated responses to prompts — arrived on the scene last year, it has prompted intense speculation about the long-term repercussions on a host of industries and activities.

But nowhere has the impact been felt more immediately than in education. Overnight, rather than labour through traditional exercises designed to develop and assess learning, students could simply instruct a computer to compose essays, answer maths questions or quickly perform complex coursework assignments and pass the results off as their own.

As a result, schools and universities have been forced into a fundamental rethink of how they conduct both tuition and academic testing.

Worries about AI-based plagiarism have pushed a number of institutions to opt for an outright ban of bots like ChatGPT. But enforcing this is difficult, because detecting when the technology has been used is so far unreliable.

Video: ChatGPT writes about British designer Yinka Ilori, an example of what ChatGPT says when asked about a prominent British designer.

Given how pervasive the technology already is, some educators are instead moving in the opposite direction and cautiously experimenting with ways to use generative AI to enhance their lessons.

Many students are keen for them to take this approach. For Lauren and her friends, months of playing around with ChatGPT have convinced them there is more to be gained from generative AI than simply cheating. And with the technology threatening to overhaul the jobs market and become a permanent communication tool in everyday lives, they are anxious to be prepared for the turbulence to come.

But these experiments raise the question of whether it is possible to open the door to AI in education without undercutting the most important features of human learning — about what it actually means to be numerate and to be literate.

“We don’t yet understand what generative AI is going to do to our world,” says Conrad Wolfram, the European co-founder of AI-driven research platform Wolfram, who has long pushed for an overhaul of the way maths is taught. “So it’s hard to work out yet how it should affect the content of education.”

AI enters the chat

When ChatGPT was launched by San Francisco-based tech company OpenAI in November 2022, the 300-odd-person team, backed by Microsoft, was expecting it to be a small-scale experiment that would help them build better AI systems in the future. What happened next left them stunned.

Within weeks, ChatGPT, a tool based on software known as a large language model, was being used by more than 100mn people globally. Now, it is being tested inside law firms, management consultancies, news publishers, financial institutions, governments and schools, for mental health therapy and legal advice, to write code, essays and contracts, summarise complex documents, and run online businesses.

For lecturers at the University of Cambridge, the timing of ChatGPT’s launch — as students headed home for Christmas holidays — was convenient.

“We were able to take stock,” says Professor Bhaskar Vira, the university’s pro-vice-chancellor for education. In the discussions that followed, teaching staff observed as other universities took action on ChatGPT, in some cases banning the technology, in others offering students guidance.

By the time students returned, the university had decided a ban would be futile. “We understood it wasn’t feasible,” Vira says. Instead, the university sought to establish fair use guidelines. “We need to have boundaries so they have a very clear idea of what is permitted and not permitted.”

Their assessment was correct. A survey by Cambridge student newspaper Varsity last month found almost half of all students have used ChatGPT to complete their studies. One-fifth used it in work that contributed to their degree and 7 per cent planned to use it in exams. It was the equivalent, said one student, of “dropping one of your cleverer mates a message” asking for help.

Ayushman Nath, a 19-year-old engineering student at Cambridge’s Churchill College, discovered ChatGPT on TikTok like many of his peers. At first, people were posting funny videos of the chatbot telling jokes, but then slowly there was a shift.

Nowadays, Nath says it is common for students to paste in long articles or academic papers and ask for summaries, or to brainstorm ideas on a broad topic. He has used it to research a report on batteries for electric cars, for example. “You can’t use it to replace fundamental knowledge from scientific papers. But it’s really useful for quickly developing a high-level understanding of a complex topic, and coming up with ideas worth exploring,” he says.

However, Nath quickly learned that you cannot trust it to be 100 per cent accurate: “I remember it gave me some stats about electric vehicle batteries, and when I asked for citations, it told me it made them up.”

Video: ChatGPT writes about electric vehicle batteries, an example of how ChatGPT describes EV batteries.

Accuracy is one of the major challenges with generative AI. Language models are known to “hallucinate”, which means they fabricate facts, sources and citations in unpredictable ways as undergraduate Nath discovered.

There is also evidence of bias in AI-written text, including sexism, racism and political partisanship, learned from the corpus of internet data, including social media platforms like Reddit and YouTube, that companies have used to train their systems.

Underpinning this is the “black box” effect, which means it is not clear how AI comes to its conclusions. “It can give you false information . . . it’s a vacuum that sucks a bunch of content off the internet and reframes it,” says Jonathan Jones, a history lecturer at the Virginia Military Institute. “We found a lot more myth and memory than hard truths.”

‘There is no going back’

Earlier this year at the Institut auf dem Rosenberg, one of Switzerland’s most elite boarding schools, 12th-grade student Karolina was working on an assignment for her sociolinguistics class. The project was on regional accents in Britain and their effects on people’s social standing and job prospects.

What she handed in was not an essay but a video, featuring an analytical dialogue on the subject between two women in the relevant accents. The script was based on Karolina’s own research. The women were not real: they were avatars generated by Colossyan Creator, AI software from a London-based start-up. “I watched it and I was in awe,” says Anita Gademann, Rosenberg’s director and head of innovation. “It was so much more impactful in making the point.”

Gademann says the school has encouraged students’ use of AI tools, following other qualification bodies including the International Baccalaureate and Wharton, the University of Pennsylvania’s business school. “There is no going back,” she says. “Children are using tech to study and learn, with or without AI.”

Over the past year, the school has observed that students’ assignments have become a lot more visual. Alongside written work, students regularly submit images or videos created by AI-powered art generators like Dall-E or Midjourney. The visuals themselves are a learning opportunity, says Gademann, citing a history class that evaluated anachronisms in AI-generated pictures of the Middle Ages, for instance.

There have been other successes: through repeated use, ChatGPT has improved the writing standard of students who previously struggled. “They are thinkers, they are intelligent, they can analyse, but [putting] something on paper, it’s hard,” Gademann says.

At Rosenberg, roughly 30 per cent of grades are already earned through debate and presentations. Gademann says the advent of generative AI has made it clear that standardised testing models have to change: “If a machine can answer a question, we shouldn’t be asking a human being to answer this same question.”

This overarching dilemma — to what extent assessments should be reshaped for AI — has become a pertinent one. Despite their problems, large language models can already produce university-level essays, and easily pass standardised tests such as the Graduate Management Admission Test (GMAT) and the Graduate Record Examinations (GRE), required for graduate school, as well as the US Medical Licensing Exam.

The software even received a B grade on a core Wharton School MBA course, prompting business school deans across the world to convene emergency faculty meetings on their future.

Earlier this year, Wolfram, the AI pioneer, twinned ChatGPT with a plug-in called WolframAlpha, and asked it to sit the maths A-level, England’s standard mathematics qualification for 18-year-olds. The answer engine achieved 96 per cent.

For Wolfram, this was further proof that maths education in the UK, where he is based, is hopelessly behind technological advances, forcing children to spend years learning longhand sums that can be easily done by computers.

Instead, Wolfram argues schools should be teaching “computational literacy”, learning how to solve tricky problems by asking computers complex questions and allowing them to do tedious calculations. This means students can step up “to the next level”, he says, and spend time using more human capabilities, such as being creative or thinking strategically.

Teaching young people to enjoy knowledge, rather than rote learn it, will better prepare children for a future world of work, Wolfram adds, predicting that menial jobs will be automated, while humans take on a higher-skilled supervisory role. “The vocational is the conceptual.”

‘Learning loss’

While AI tools are being rapidly implemented by students, and even integrated into the curriculum at some schools such as Rosenberg, the risks and limitations of the software remain clear.

A coalition of state and private schools in the UK are so concerned about the speed at which AI is developing, they are setting up a cross-sector body to advise “bewildered” educators on how best to use the technology. In a letter to The Times, the group also said they have “no confidence” that large digital companies are capable of regulating themselves.

Anna Mills, a writing instructor at the College of Marin, a community college in California, has spent a year testing language models, the technology underlying ChatGPT, such as OpenAI’s most advanced model GPT-4. Her main concern is that automating young people’s day-to-day lessons by allowing AI to do the legwork could lead to “learning loss”, a decline in essential literacy and numeracy skills.

At Wimbledon High School, where the use of AI is led by Rachel Evans, its director of digital learning and innovation, Lauren’s classmate Olivia has enjoyed using ChatGPT as a “creative spark” but thinks this risks eroding her own abilities. “When you actually want to start that yourself . . . it’s going to be really challenging if you haven’t had that practice.”

Her friend Rada is less worried. She has found ChatGPT unreliable for giving answers, but useful for helping to structure her arguments. “It’s not good at answers, but it’s good at ‘flufferising’ them,” she says, referring to the chatbot’s ability to turn rough ideas into something more digestible.

Mills agrees that AI-produced essays are often articulate and well-structured, but they can lack originality and ideas. That, she says, should force educators to interrogate what students should get from essay tasks. “We assign writing because we think it helps people learn to think. Not to create more student essays,” she adds. “It’s the mainstay process that academia has developed to help people think and communicate and get further in their understanding. We want students to engage in that.”

Senior leaders at the Harris Federation, which runs 52 state-funded primary and secondary schools in London, are excited about the potential for generative AI to help students with research, as well as to free up teachers’ time by generating lesson plans or marking work.

Yet the federation’s chief executive, Sir Dan Moynihan, is concerned the technology could present an “equity issue”. Not only may poorer students struggle to access paid-for AI technology that will make work easier, he says, schools with tight budgets may use AI to cut corners in a way that is not necessarily the best for learning.

“I’m not a pessimist, but we have to collectively avoid this becoming a dystopian thing,” says Moynihan. “We need to make sure we don’t end up with AI working with large numbers of kids [and] teachers acting as pastoral support, or maintaining discipline.”

Life-changing technology

However, there are those who point out that educators are only just beginning to think of ways the technology might be used in classrooms.

In September 2022, entrepreneur Sal Khan, the founder of Khan Academy, a non-profit whose free online tutorials are viewed by millions of children globally, was approached by OpenAI to test out its new model GPT-4, which underpins the paid-for version of ChatGPT.

After Khan, who also runs a bricks-and-mortar private school in the heart of Silicon Valley, spent a weekend playing with it, he realised it was not just about producing answers: GPT-4 could provide rationales, prompt the student in a Socratic way and even write its own questions. “I always thought it would be 10-20 years before we could even hope to deliver every student an on-demand tutor,” says Khan. “But then I was like, wow, this could be months away.”

By March, a model from Khan’s team had gone from “almost nothing to a fairly compelling tutor”, called Khanmigo. Khan pays OpenAI a fee to cover the computational cost of running the AI system, roughly $9-$10 per month per user.

The AI tutor uses GPT-4 to debate with students, coach them on subjects ranging from physics to English, and answer questions as pupils complete tutorials. Asking the software to provide an explanation for its answers increases its accuracy and improves the lesson, he says. The product is being rolled out to hundreds of teachers and children across Khan’s physical and virtual schools, and up to 100,000 pupils across 500 US school districts partnered with Khan Academy will access it by the end of 2023.

Khan describes ChatGPT as the gateway to a “very powerful technology” that can be misused. However, if it is adapted to be “pedagogically sound, with clear oversight and moderation filters”, language models can be revolutionary.

“I don’t say lightly, I think it’s probably the biggest transformation of our life . . . especially in education,” Khan says. “You’re going to be able to awaken people’s curiosity, get them excited about learning. They’re going to have an infinitely patient tutor with them, always.”

Back in Wimbledon, Lauren and her classmates are becoming aware that generative AI, while useful, is no substitute for some of the most important and rewarding parts of the learning process.

“One of our main takeaways was the importance of being stuck,” says Lauren. “Generally in life you need to be able to overcome little hurdles to feel proud of your work.”

“It’s so vital not to ban the use of it in education, but instead . . . learn how to use it through proper, critical thinking,” her classmate Olivia adds. “Because it will be a tool in our futures.”

AI is boosting productivity & employee engagement for organisations, says Bhaskar Basu from Microsoft India (The Indian Express)

Artificial Intelligence is the buzzword of the year. Even as companies around the world are adopting various AI-backed technologies, tech giant Microsoft has integrated it into a variety of tools that are essential for organisational growth. These tools are mostly aimed at boosting productivity and helping millions of professionals to organise their work in the most efficient manner.

Among the deluge of AI-powered tools, Microsoft recently launched its Copilot in Microsoft Viva and Microsoft Viva Glint to assist organisations in creating a more ‘engaged and productive’ workforce. Viva harnesses next-generation AI to accelerate performance, engagement and productivity. With AI, Microsoft is not only amplifying productivity but is also offering a platform for leaders and employees to engage in a more meaningful way.

The Indian Express recently got in touch with Bhaskar Basu, Country Head - Modern Work, Microsoft India. In view of Microsoft’s latest announcements, Basu spoke at length about how AI is transforming the workspace and shared insights on the Viva Copilot.

Below is a glimpse of the question-and-answer session with Basu:

What are your thoughts on the AI momentum and what do you mean by AI as a Copilot?

There is a tremendous amount of curiosity, excitement and a sense of expectation to demystify trends and technologies related to generative AI. We are hearing about Dall-E, ChatGPT, Viva, Microsoft Copilot, etc. I am extremely excited about where we find ourselves today and I truly believe that we are in an entirely new era of computing. There is a lot that is being said about the emergence of something we know as ‘powerful new foundation models’ and accessible natural language interfaces. This is indeed an exciting new phase of AI.


With the current generation of AI, it is evident that we are shifting from autopilot to copilot mode. Here, I use the term "copilot" as an analogy to an aeroplane. In this analogy, the pilot charts the course, navigates, provides directions, and handles takeoff and landing. The pilot's clear mission is to safely transport passengers from point A to point B. The idea of a copilot in AI is to act as a companion that alleviates the burden of repetitive tasks and becomes an ally to the pilot. The copilot's role is to support the pilot in achieving their vision, strategy, and overall purpose. Hence, the term "copilot" represents this concept of an AI ally.

How has AI been shaping at Microsoft?

Microsoft has been an AI company for a long time, and recently we have made significant strategic investments to enhance our copilot capabilities. Along with Viva, we have introduced various offerings such as Azure Open AI Services, GitHub Copilot, Dynamics 365 Copilot, security copilot, and Microsoft 365 Copilot. These initiatives bring together a range of technologies, including cognitive services, machine learning, digital twins, and deep learning. Our goal is to incorporate generative AI capabilities into multiple consumer and commercial products, simplifying lives and unlocking creativity, productivity, and overall capability. We are thrilled about the possibilities and eager to push the boundaries of what AI can achieve. It's an exciting time for us.

How do you think Microsoft 365 will help in eliminating repetitive and mundane tasks, allowing staff to dedicate more time to meaningful work?

We are a highly productive organisation and Microsoft 365 applications such as Word, Excel, Outlook, PowerPoint, Teams, etc., are key to unlocking productivity. In the last decade, we have integrated AI powers such as predictive responses in Teams and voice prompts in Outlook. The inflection point for core productivity came with the introduction of the M365 copilot, which aims to reimagine the productivity experience by leveraging human words. The copilot combines the Microsoft 365 apps, the Microsoft Graph (which stores contextual signals), and a Large Language Model (LLM) capable of processing natural language.


The copilot system understands your requests, applies privacy and security principles, and generates simplified responses to enhance task completion. This copilot approach simplifies productivity by condensing content, converting documents, and generating images through text input. It's a confluence of apps, the Microsoft Graph, and the LLM, providing easy-to-action text. This is the Microsoft 365 copilot.
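The pattern Basu describes, grounding an LLM request in contextual signals from the Microsoft Graph while respecting the user's existing access rights, can be illustrated with a minimal sketch. All class and function names below are hypothetical stand-ins, not Microsoft's actual API:

```python
# Illustrative sketch of the "grounded prompt" pattern described above:
# combine contextual signals from a graph of user content with an LLM,
# filtering by access rights before anything reaches the prompt.
# All names here are hypothetical; this is not Microsoft's actual API.
from dataclasses import dataclass


@dataclass
class Doc:
    owner: str
    summary: str


class FakeGraph:
    """Stand-in for a contextual-signal store like the Microsoft Graph."""

    def __init__(self, docs):
        self.docs = docs

    def search(self, query):
        return [d for d in self.docs if query.lower() in d.summary.lower()]

    def can_read(self, user, doc):
        # Simplistic rights check: users may read only their own content.
        return doc.owner == user


def build_prompt(user, query, graph):
    """Ground the request in only the content this user may read."""
    permitted = [d for d in graph.search(query) if graph.can_read(user, d)]
    context = "\n".join(d.summary for d in permitted)
    return f"Context:\n{context}\n\nRequest: {query}"


graph = FakeGraph([
    Doc("alice", "Q3 sales summary"),
    Doc("bob", "Q3 sales forecast draft"),
])
prompt = build_prompt("alice", "Q3 sales", graph)
# Only Alice's document reaches the prompt; Bob's stays filtered out.
```

The key design point, as in the article, is that permissions are enforced before prompt construction, so the model never sees content the requesting user could not already access.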

How do Viva and its features enable effective employee engagement?

Over the past few years, the Covid-19 pandemic has changed the way we work and highlighted the importance of employee experience. Organisations have realised the significance of focussing on connection, insight, purpose, and growth to enhance employee engagement. In response to these insights, Microsoft developed Viva, a comprehensive platform that addresses these objectives. Viva leverages AI and machine learning capabilities to deliver some valuable features.

Insights, for example, allows managers to gain qualitative input from their team members, analyse trends, and make informed decisions. Viva Topics simplifies content discovery by scanning the network, aggregating relevant information, and connecting people working on similar topics. Viva Learning offers personalised learning pathways that align with employees' career goals and development needs. Glint, which will be integrated into Viva, helps organisations listen to employee feedback and take action to improve the employee experience. Lastly, Viva Goals provides clarity and alignment by defining individual and team goals and driving purposeful actions.


These capabilities within Viva are designed to enhance productivity, efficiency, communication, and employee experience. They leverage technology to simplify tasks, provide insights, and offer personalised experiences. Microsoft recognises the importance of continuously evolving these features to meet the evolving needs and expectations of employees.

How does Viva Glint simplify the feedback process, and how does it streamline the overall employee experience journey for both leaders and staff?

Glint aims to consistently assess and improve employee engagement by seeking and acting on feedback. Copilot in Viva Glint assists leaders in summarising and analysing large volumes of employee comments, saving significant time. By engaging with the copilot in simple written language, leaders can request pain-point summaries or actionable intervention recommendations. Connecting this capability with objectives and key results (OKRs) allows leaders to develop business and people strategies based on feedback outcomes. Glint simplifies the leader's journey by correlating data, providing summaries, and offering actionable insights. Microsoft plans to incorporate copilot capabilities across other modules in Viva, aiming to streamline the overall employee experience for both leaders and employees.


How is Microsoft safeguarding employee data and privacy with all these AI interventions in its applications? 

Responsible and ethical AI is a priority for Microsoft, which also lays a strong emphasis on privacy, compliance and security. Copilot adheres to organisation-level security, compliance, and privacy policies within Microsoft 365. It ensures that AI assimilates and renders content only to users who have proper access, respecting existing rights controls. Individual users can access data based on their permissions, and personal data is not used to train or improve language models. Insights and reports provided to leaders are aggregated and de-identified to protect privacy. Microsoft employs various principles and technologies to enforce these fundamentals.

How do you think AI will impact the role of leaders going forward?

Our research indicates that leaders are increasingly adopting AI and anticipating greater capabilities. This aligns with the idea of using AI as an ally, a companion, and a co-pilot. Personally, I am extremely excited about this capability. Early preview demos have shown me the potential to unlock productivity and simplify everyday tasks using natural language. As a leader, I can offload the learning curve, focus on strategic thinking, and make myself more productive. Enterprise AI is disrupting workplaces, offering real-time insights, well-being monitoring, and the ability to set team objectives. This technology presents a compelling opportunity for organisations and individuals to embrace and shape it as an ally in achieving their objectives.

Google expected to unveil its answer to Microsoft's AI search challenge

By Jeffrey Dastin

May 10 (Reuters) - Alphabet Inc's Google on Wednesday is expected to unveil more artificial intelligence in its products to answer the latest competition from Microsoft Corp, which has threatened its perch atop the nearly $300-billion search advertising market.

Through an internal project code-named Magi, Google has looked to infuse its namesake engine with generative artificial intelligence, technology that can answer questions with human-like prose and derive new content from past data.

The effort will be the most closely watched as Google executives take the stage at its yearly conference I/O in Mountain View, California, near its headquarters. The result could alter how consumers access the world's information and which company wins the global market for search advertising, estimated by research firm MAGNA to be $286 billion this year.

For years the top portal to the internet, Google has found its position in question since rivals began exploiting generative AI as an alternative way to present content from the web.

First came ChatGPT, the chatbot from Microsoft-backed OpenAI that industry observers called Google's disruptor. Next came Bing, Microsoft's search engine updated with a similarly dextrous chatbot, which can answer queries where no obvious result existed online -- like what car seat to buy for a particular model vehicle.

Microsoft last month touted U.S. share gains for Bing, recently growing to more than 100 million daily active users, still dwarfed by billions of searches on Google.

Google's rivals have taken its research breakthroughs from prior years and run with them, outpacing their inventor. That has represented a technological affront and a business one: Microsoft said every percentage point of share it gained in search advertising could draw another $2 billion in revenue.

For months now, teams at Google have sprinted to release technology at I/O or prior, like its ChatGPT competitor Bard, defending the company's turf.

Sundar Pichai, Alphabet's chief executive, this year said generative AI to distill complex queries would come to Google Search, as would more perspectives, "like blogs from people who play both piano and guitar."

Google is also seeking to restate its research mantle. At Wednesday's conference, it is expected to announce a more powerful AI model known as PaLM 2, CNBC reported.

It is also expected to showcase new hardware for its lineup of Pixel devices, media have reported. (Reporting By Jeffrey Dastin in San Francisco; Additional reporting by Sheila Dang; Editing by David Gregorio)


