killexams.com offers a demo version so you can test our exam simulator and experience the real test environment, making it much easier to pass the real DP-203 exam. killexams.com gives you 3 months of free updates of DP-203 dumps with real questions. Our certification team is continuously available on the back end and updates the material whenever required.
DP-203 Dumps
DP-203 Braindumps
DP-203 Real Questions
DP-203 Practice Test
DP-203 dumps free
Microsoft
DP-203
Data Engineering on Microsoft Azure
http://killexams.com/pass4sure/exam-detail/DP-203
Question: 92
HOTSPOT
You need to design an analytical storage solution for the transactional data. The solution must meet the sales
transaction dataset requirements.
What should you include in the solution? To answer, select the appropriate options in the answer area. NOTE: Each
correct selection is worth one point.
Answer:
Explanation:
Box 1: Round-robin
Round-robin tables are useful for improving loading speed.
Scenario: Partition data that contains sales transaction records. Partitions must be designed to provide efficient loads
by month.
Box 2: Hash
Hash-distributed tables improve query performance on large fact tables.
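For illustration, the following T-SQL is a minimal sketch of how both distributions might be declared in a Synapse dedicated SQL pool, with monthly partitions to support efficient loads; the table names, column names, hash key, and partition boundaries are assumptions made for the example, not values taken from the case study.
-- Staging table: ROUND_ROBIN distribution gives the fastest loads
CREATE TABLE stg.SalesTransactions
(
    TransactionId      BIGINT        NOT NULL,
    TransactionDateId  INT           NOT NULL,  -- e.g. 20210101 for 1 Jan 2021
    CustomerId         INT           NOT NULL,
    Amount             DECIMAL(18,2) NOT NULL
)
WITH
(
    DISTRIBUTION = ROUND_ROBIN,
    PARTITION ( TransactionDateId RANGE RIGHT FOR VALUES (20210101, 20210201, 20210301) )
);
-- Fact table: HASH distribution improves query performance on large fact tables
CREATE TABLE dbo.FactSalesTransactions
(
    TransactionId      BIGINT        NOT NULL,
    TransactionDateId  INT           NOT NULL,
    CustomerId         INT           NOT NULL,
    Amount             DECIMAL(18,2) NOT NULL
)
WITH
(
    DISTRIBUTION = HASH(TransactionId),
    CLUSTERED COLUMNSTORE INDEX,
    PARTITION ( TransactionDateId RANGE RIGHT FOR VALUES (20210101, 20210201, 20210301) )
);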
Question: 93
You have an Azure data factory.
You need to examine the pipeline failures from the last 180 days.
What should you use?
A. the Activity log blade for the Data Factory resource
B. Azure Data Factory activity runs in Azure Monitor
C. Pipeline runs in the Azure Data Factory user experience
D. the Resource health blade for the Data Factory resource
Answer: B
Explanation:
Data Factory stores pipeline-run data for only 45 days. Use Azure Monitor if you want to keep that data for a longer
time.
Reference: https://docs.microsoft.com/en-us/azure/data-factory/monitor-using-azure-monitor
Question: 94
HOTSPOT
You build an Azure Data Factory pipeline to move data from an Azure Data Lake Storage Gen2 container to a
database in an Azure Synapse Analytics dedicated SQL pool.
Data in the container is stored in the following folder structure.
/in/{YYYY}/{MM}/{DD}/{HH}/{mm}
The earliest folder is /in/2021/01/01/00/00. The latest folder is /in/2021/01/15/01/45.
You need to configure a pipeline trigger to meet the following requirements:
Existing data must be loaded.
Data must be loaded every 30 minutes.
Late-arriving data of up to two minutes must be included in the load for the time at which the data should have
arrived.
How should you configure the pipeline trigger? To answer, select the appropriate options in the answer area. NOTE:
Each correct selection is worth one point.
Answer:
Explanation:
Box 1: Tumbling window
To be able to use the Delay parameter we select Tumbling window.
Box 2:
Recurrence: 30 minutes, not 32 minutes
Delay: 2 minutes.
The amount of time to delay the start of data processing for the window. The pipeline run is started after the expected
execution time plus the amount of delay. The delay defines how long the trigger waits past the due time before
triggering a new run. The delay doesn’t alter the window startTime.
Question: 95
HOTSPOT
You need to design a data ingestion and storage solution for the Twitter feeds. The solution must meet the customer
sentiment analytics requirements.
What should you include in the solution? To answer, select the appropriate options in the answer area. NOTE: Each
correct selection is worth one point.
Answer:
Explanation:
Box 1: Configure Event Hubs partitions
Scenario: Maximize the throughput of ingesting Twitter feeds from Event Hubs to Azure Storage without purchasing
additional throughput or capacity units.
Event Hubs is designed to help with processing of large volumes of events. Event Hubs throughput is scaled by using
partitions and throughput-unit allocations.
Event Hubs traffic is controlled by TUs (standard tier). Auto-inflate enables you to start small with the minimum
required TUs you choose. The feature then scales automatically to the maximum limit of TUs you need, depending on
the increase in your traffic.
Box 2: An Azure Data Lake Storage Gen2 account
Scenario: Ensure that the data store supports Azure AD-based access control down to the object level.
Azure Data Lake Storage Gen2 implements an access control model that supports both Azure role-based access control
(Azure RBAC) and POSIX-like access control lists (ACLs).
Question: 96
You have an Azure Stream Analytics query. The query returns a result set that contains 10,000 distinct values for a
column named clusterID.
You monitor the Stream Analytics job and discover high latency.
You need to reduce the latency.
Which two actions should you perform? Each correct answer presents a complete solution. NOTE: Each correct
selection is worth one point.
A. Add a pass-through query.
B. Add a temporal analytic function.
C. Scale out the query by using PARTITION BY.
D. Convert the query to a reference query.
E. Increase the number of streaming units.
Answer: C, E
Explanation:
C: Scaling a Stream Analytics job takes advantage of partitions in the input or output. Partitioning lets you divide data
into subsets based on a partition key. A process that consumes the data (such as a Streaming Analytics job) can
consume and write different partitions in parallel, which increases throughput.
E: Streaming Units (SUs) represents the computing resources that are allocated to execute a Stream Analytics
job. The higher the number of SUs, the more CPU and memory resources are allocated for your job. This capacity lets
you focus on the query logic and abstracts the need to manage the hardware to run your Stream Analytics job in a
timely manner.
References:
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-parallelization
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-streaming-unit-consumption
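As a concrete illustration of option C, here is a hedged sketch of a Stream Analytics query that scales out by partition; the input and output names, the window size, and the partition column are assumptions for the example and depend on how the job and its Event Hubs input are configured.
-- Each Event Hubs partition is processed independently, so the job can run in parallel
SELECT
    clusterID,
    PartitionId,
    COUNT(*) AS eventCount
INTO [output]
FROM [input]
PARTITION BY PartitionId
GROUP BY clusterID, PartitionId, TumblingWindow(minute, 5)
With compatibility level 1.2 or later, Stream Analytics parallelizes partitioned inputs automatically, so the explicit PARTITION BY clause is mainly needed on older compatibility levels.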
Question: 97
HOTSPOT
You have an Azure subscription.
You need to deploy an Azure Data Lake Storage Gen2 Premium account.
The solution must meet the following requirements:
• Blobs that are older than 365 days must be deleted.
• Administrator efforts must be minimized.
• Costs must be minimized
What should you use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is
worth one point.
Answer:
Explanation:
https://learn.microsoft.com/en-us/azure/storage/blobs/premium-tier-for-data-lake-storage
Question: 98
DRAG DROP
You need to ensure that the Twitter feed data can be analyzed in the dedicated SQL pool.
The solution must meet the customer sentiment analytics requirements.
Which three Transact-SQL DDL commands should you run in sequence? To answer, move the appropriate
commands from the list of commands to the answer area and arrange them in the correct order. NOTE: More than one
order of answer choices is correct. You will receive credit for any of the correct orders you select.
Answer:
Explanation:
Scenario: Allow Contoso users to use PolyBase in an Azure Synapse Analytics dedicated SQL pool to query the
content of the data records that host the Twitter feeds. Data must be protected by using row-level security (RLS). The
users must be authenticated by using their own Azure AD credentials.
Box 1: CREATE EXTERNAL DATA SOURCE
External data sources are used to connect to storage accounts.
Box 2: CREATE EXTERNAL FILE FORMAT
CREATE EXTERNAL FILE FORMAT creates an external file format object that defines external data stored in
Azure Blob Storage or Azure Data Lake Storage. Creating an external file format is a prerequisite for creating an
external table.
Box 3: CREATE EXTERNAL TABLE AS SELECT
When used in conjunction with the CREATE TABLE AS SELECT statement, selecting from an external table imports
data into a table within the SQL pool. In addition to the COPY statement, external tables are useful for loading data.
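To make the sequence concrete, here is a hedged T-SQL sketch of the three commands run in order against the dedicated SQL pool; the storage location, object names, and file format settings are placeholders invented for the example rather than values from the case study.
-- 1. Register the storage account that holds the Twitter feed files
CREATE EXTERNAL DATA SOURCE TwitterFeedSource
WITH
(
    TYPE = HADOOP,
    LOCATION = 'abfss://feeds@mydatalake.dfs.core.windows.net'
);
-- 2. Describe how the feed files are encoded (CSV assumed here)
CREATE EXTERNAL FILE FORMAT TwitterCsvFormat
WITH
(
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS ( FIELD_TERMINATOR = ',', STRING_DELIMITER = '"' )
);
-- 3. Create the external table, exporting the result of a query over an assumed staging table
CREATE EXTERNAL TABLE dbo.TwitterFeeds
WITH
(
    LOCATION = '/twitter/curated/',
    DATA_SOURCE = TwitterFeedSource,
    FILE_FORMAT = TwitterCsvFormat
)
AS
SELECT TweetId, TweetText, Sentiment
FROM stg.RawTweets;  -- stg.RawTweets is a hypothetical table holding the ingested feed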
Question: 99
DRAG DROP
You have the following table named Employees.
You need to calculate the employee_type value based on the hire_date value.
How should you complete the Transact-SQL statement? To answer, drag the appropriate values to the correct targets.
Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or
scroll to view content. NOTE: Each correct selection is worth one point.
Answer:
Explanation:
Box 1: CASE
CASE evaluates a list of conditions and returns one of multiple possible result expressions.
CASE can be used in any statement or clause that allows a valid expression. For example, you can use CASE in
statements such as SELECT, UPDATE, DELETE and SET, and in clauses such as select_list, IN, WHERE, ORDER
BY, and HAVING.
Syntax: Simple CASE expression:
CASE input_expression
WHEN when_expression THEN result_expression [ …n ] [ ELSE else_result_expression ] END
Box 2: ELSE
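For context, a minimal sketch of how the completed statement might look, using a searched CASE expression; the cutoff date and the employee_type labels are illustrative assumptions, since the real values come from the exhibit that accompanies the question.
SELECT
    employee_id,
    hire_date,
    CASE
        WHEN hire_date >= '2019-01-01' THEN 'New'   -- assumed cutoff for the example
        ELSE 'Standard'                             -- Box 2: ELSE catches every remaining hire_date
    END AS employee_type
FROM dbo.Employees;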
Question: 100
HOTSPOT
You are building a database in an Azure Synapse Analytics serverless SQL pool.
You have data stored in Parquet files in an Azure Data Lake Storage Gen2 container.
Records are structured as shown in the following sample.
{
"id": 123,
"address_housenumber": "19c",
"address_line": "Memory Lane",
"applicant1_name": "Jane",
"applicant2_name": "Dev"
}
The records contain two applicants at most.
You need to build a table that includes only the address fields.
How should you complete the Transact-SQL statement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Answer:
Explanation:
Box 1: CREATE EXTERNAL TABLE
An external table points to data located in Hadoop, Azure Storage blob, or Azure Data Lake Storage. External tables
are used to read data from files or write data to files in Azure Storage. With Synapse SQL, you can use external tables
to read external data using dedicated SQL pool or serverless SQL pool.
Syntax:
CREATE EXTERNAL TABLE { database_name.schema_name.table_name | schema_name.table_name | table_name }
( <column_definition> [ ,...n ] )
WITH (
LOCATION = 'folder_or_filepath',
DATA_SOURCE = external_data_source_name,
FILE_FORMAT = external_file_format_name
)
Box 2: OPENROWSET
When using serverless SQL pool, CETAS is used to create an external table and export query results to Azure Storage
Blob or Azure Data Lake Storage Gen2.
Example:
AS
SELECT decennialTime, stateName, SUM(population) AS population
FROM
OPENROWSET(BULK
'https://azureopendatastorage.blob.core.windows.net/censusdatacontainer/release/us_population_county/year=*/*.parquet',
FORMAT = 'PARQUET') AS [r]
GROUP BY decennialTime, stateName
GO
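Applied to this question's sample records, the statement could take roughly the following shape in a serverless SQL pool, keeping only the address fields; the external data source, file format, storage path, and table name are assumptions for the example and must already exist (or be created) in the database.
CREATE EXTERNAL TABLE dbo.ApplicationAddresses
WITH
(
    LOCATION = 'curated/addresses/',
    DATA_SOURCE = MyDataLake,          -- hypothetical external data source
    FILE_FORMAT = ParquetFileFormat    -- hypothetical Parquet file format
)
AS
SELECT address_housenumber, address_line
FROM OPENROWSET(
    BULK 'https://myaccount.dfs.core.windows.net/mycontainer/records/*.parquet',
    FORMAT = 'PARQUET'
) AS r;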
For More exams visit https://killexams.com/vendors-exam-list
Kill your exam at First Attempt....Guaranteed!
Major Microsoft Azure outage was caused by a simple typo
A Microsoft Azure DevOps outage in the South Brazil Region, which lasted over 10 hours, was caused by a typo in the code that led to the deletion of 17 production databases.
Having apologized to impacted customers for the outage, Microsoft has now issued a full post-mortem, sharing details about the investigation that took place from when the outage was first noticed at 12:10 UTC on May 24, until its remedy at 22:31 UTC on the same day.
Microsoft principal software engineering manager Eric Mattingly shared details of the code base upgrade which formed part of Sprint 222. Inside the pull request was a hidden typo bug in the snapshot deletion job, which ended up deleting the Azure SQL Server rather than the individual Azure SQL Database.
Mattingly explained: “when the job deleted the Azure SQL Server, it also deleted all seventeen production databases for the scale unit,” confirming that no data had been lost during the accidental process.
The outage was detected within 20 minutes, at which point the company’s on-call engineers got to work. However, according to the event log, the root cause was not identified until 16:04, almost four hours after the outage had begun.
Microsoft blamed the over ten-hour fix time on the fact that customers themselves are unable to restore Azure SQL Servers, as well as backup redundancy complications and a “complex set of issues with [its] web servers.”
Having learned from its mistake, Microsoft has now promised to roll out Azure Resource Manager Locks to its key resources, in an effort to prevent future accidental deletion.
Despite a same-day fix, customers in the region were left without access to some services for several hours, emphasizing how easy it is for things to go wrong and the importance of having backup plans to reduce reliance on single service providers, including cloud storage and other off-prem infrastructure.
Serverless is the future of PostgreSQL
PostgreSQL has been hot for years, but that hotness can also be a challenge for enterprises looking to pick between a host of competing vendors. As enterprises look to move off expensive, legacy relational database management systems (RDBMS) but still want to stick with an RDBMS, open source PostgreSQL is an attractive, less-expensive alternative. But which PostgreSQL? AWS was once the obvious default with two managed PostgreSQL services (Aurora and RDS), but now there’s Microsoft, Google, Aiven, TimeScale, Crunchy Data, EDB, Neon, and more.
In an interview, Neon founder and CEO Nikita Shamgunov stressed that among this crowd of pretenders to the PostgreSQL throne, the key differentiator going forward is serverless. “We are serverless, and all the other ones except for Aurora, which has a serverless option, are not,” he declares. If he’s right about the importance of serverless for PostgreSQL adoption, it’s possible the future of commercial PostgreSQL could come down to a serverless battle between Neon and AWS.
Ditch those servers
In some ways, serverless is the fulfillment of cloud’s promise. Almost since the day it started, for example, AWS has pitched the cloud as a way to offload the “undifferentiated heavy lifting” of managing servers, yet even with services like Amazon EC2 or Amazon RDS for PostgreSQL, developers still had to think about servers, even if there was much less work involved.
In a truly serverless world, developers don’t have to think about the underlying infrastructure (servers) at all. They just focus on building their applications while the cloud provider takes care of provisioning servers. In the world of databases, a truly serverless offering will separate storage and compute, and substitute the database’s storage layer by redistributing data across a cluster of nodes.
Among other benefits of serverless, as Anna Geller, Kestra’s head of developer relations, explains, serverless encourages useful engineering practices. For example, if we can agree that “it’s beneficial to build individual software components in such a way that they are responsible for only one thing,” she notes, then serverless helps because it “encourages code that is easy to change and stateless.” Serverless all but forces a developer to build reproducible code. She says, “Serverless doesn’t only force you to make your components small, but it also requires that you define all resources needed for the execution of your function or container.”
The result: better engineering practices and much faster development times, as many companies are discovering. In short, there is a lot to love about serverless.
Shamgunov sees two primary benefits to running PostgreSQL serverless. The first is that developers no longer need to worry about sizing. All the developer needs is a connection string to the database without worrying about size/scale. Neon takes care of that completely. The second benefit is consumption-based pricing, with the ability to scale down to zero (and pay zero). This ability to scale to zero is something that AWS doesn’t offer, according to Ampt CEO Jeremy Daly. Even when your app is sitting idle, you’re going to pay.
But not with Neon. As Shamgunov stresses in our interview, “In the SQL world, making it truly serverless is very, very hard. There are shades of gray” in terms of how companies try to deliver that serverless promise of scaling to zero, but only Neon currently can do so, he says.
Do people care? The answer is yes, he insists. “What we’ve learned so far is that people really care about manageability, and that’s where serverless is the obvious winner. [It makes] consumption so easy. All you need to manage is a connection stream.” This becomes increasingly important as companies build ever-bigger systems with “bigger and bigger fleets.” Here, “It’s a lot easier to not worry about how big [your] compute [is] at a point in time.” In other systems, you end up with runaway costs unless you’re focused on dialing resources up or down, with a constant need to size your workloads. But not in a fully serverless offering like Neon, Shamgunov argues. “Just a connection stream and off you go. People love that.”
Making the most of serverless
Not everything is rosy in serverless land. Take cold starts, for example. The first time you invoke a function, the serverless system must initialize a new container to run your code. This takes time and is called a “cold start.” Neon has been “putting in a non-trivial amount of engineering budget to solving the cold-start problem,” Shamgunov says. This follows a host of other performance improvements the company has made, such as speeding up PostgreSQL connections.
Neon also uniquely offers branching. As Shamgunov explains, Neon supports copy-on-write branching, which “allows people to run a dedicated database for every preview or every GitHub commit.” This means developers can branch a database, which creates a full copy of the data and gives developers a separate serverless endpoint to it. “You can run your CI/CD pipeline, you can test it, you can do capacity or all sorts of things, and then bring it back into your main branch. If you don’t use the branch, you spend $0. Because it’s serverless. Truly serverless.”
All of which helps Neon deliver on its promise of “being as easy to consume as Stripe,” in Shamgunov’s words. To win the PostgreSQL battle, he continues, “You need to be as developer-friendly as Stripe.” You need, in short, to be serverless.
Microsoft warns of Volt Typhoon, latest salvo in global cyberwar
Microsoft’s warning on Wednesday that the China-sponsored actor Volt Typhoon attacked U.S. infrastructure put a hard emphasis on presentations by cybersecurity and international affairs experts that a global war in cyberspace is pitting authoritarian regimes against democracies.
Jump to:
China’s commitment to cyberwarfare
Microsoft’s notification pointed out that Volt Typhoon — which hit organizations in sectors spanning IT, communications, manufacturing, utility, transportation, construction, maritime, government and education — has been pursuing a “living off the land” strategy focused on data exfiltration since 2021. The tactic typically uses social engineering exploits like phishing to access networks invisibly by riding on legitimate software. It uses a Fortinet exploit to gain access and uses valid accounts to persist (Figure A).
Figure A
Volt Typhoon attack diagram. Image: Microsoft
Nadir Izrael, the chief technology officer and co-founder of the Armis security firm, pointed out that China’s defense budget has been increasing over the years, reaching an estimated $178 billion in 2020. “This growing investment has enabled China to build up its cyber capabilities, with more than 50,000 cyber soldiers and an advanced cyberwarfare unit,” he said.
He added that China’s investment in offensive cyber capabilities has created “a global weapon in its arsenal to rattle critical infrastructure across nearly every sector — from communications to maritime — and interrupt U.S. citizens’ lives.” He said, “Cyberwarfare is an incredibly impactful, cost-effective tool for China to disrupt world order.”
“As the world becomes increasingly digitized, cyberwarfare is modern warfare,” Armis said. “This has to be a wake-up call for the U.S. and western nations.”
At the WithSecure Sphere23 conference in Helsinki, Finland, before this security news had crossed the wires, Jessica Berlin, a Germany-based foreign policy analyst and founder of the consultancy CoStruct, said the U.S., the European Union and other democracies have not awakened to the implications of cyberwarfare by Russia, China and North Korea. She said these countries are engaged in a cybernetic world war — one that autocracies have the upper hand in because they have fully acknowledged and embraced it and have committed to waging it as such.
She told TechRepublic that tech and security companies could play a key role in awakening citizens and governments to this fact by being more transparent about attacks. She also noted the European Union’s General Data Protection Regulation, which has been in effect for five years, has been a powerful tool for oversight of digital information, data provenance and misinformation on social platforms.
Professionalization of cybercrime lowers bar to entry
Stephen Robinson, a senior threat intelligence analyst at WithSecure, said the cybercriminal ecosystem’s mirroring of legitimate business has made it easier for state actors and less sophisticated groups to buy what they can’t make. This professionalization of cybercrime has created a formal service sector. “They are outsourcing functions, hiring freelancers, subcontracting; criminal service providers have sprung up, and their existence is industrializing exploitation,” said Robinson.
The success of the criminal as-a-service model is expedited by such frameworks as Tor anonymous data transfer and cryptocurrency, noted Robinson, who delineated some dark web service verticals.
Initial access brokers: These brokers are key because they thrive in the service-oriented model and are enablers. They use whatever method they can to gain access and then offer that access.
Crypter as a service: Crypter is a tool to hide a malware payload. And this, said Robinson, has led to an arms race between malware and antimalware.
Crypto jackers: These actors break into a network and drop software and are often one of the first actors to exploit a server vulnerability. They constitute a low threat yet are a very strong indicator that something has happened or will, according to Robinson.
Malware-as-a-service: Highly technical and with advanced services like support and contracts and access to premium products.
Nation state actors: Nation state actors use the above tools, which enable them to spin up campaigns and access new victims without being attributed.
The firm’s analysis of more than 3,000 data leaks by these groups showed that organizations in the U.S. were the most targeted victims, followed by Canada, the U.K., Germany, France and Australia.
In addition, the firm’s research showed that the construction industry accounted for 19% of the data leaks; the automotive industry accounted for only 6% of attacks.
“In pursuit of a bigger slice of the huge revenues of the ransomware industry, ransomware groups purchase capabilities from specialist e-crime suppliers in much the same way that legitimate businesses outsource functions to increase their profits,” said Robinson. “This ready supply of capabilities and information is being taken advantage of by more and more cyberthreat actors, ranging from lone, low-skilled operators right up to nation state APTs. Ransomware didn’t create the cybercrime industry, but it has really thrown fuel on the fire.”
The firm offered an example that resembled the mass looting of a department store after the door had been left ajar. One organization was victimized by five threat actors, each with different objectives and representing a different type of cybercrime service: the Monti ransomware group, Qakbot malware-as-a-service, the 8220 crypto-jacking gang, an unnamed initial access broker and a subset of Lazarus Group associated with North Korea.
In these incidents, WithSecure threat intelligence reported encountering six distinct examples of the “as a service” model in use in the kill chains observed (Figure B).
Figure B
Six “as a service” models. Image: WithSecure
According to the report, this professionalization trend makes the expertise and resources to attack organizations accessible to lesser-skilled or poorly resourced threat actors. The report predicts it is likely the number of attackers and the size of the cybercrime industry will grow in the coming years.
How to mitigate Volt Typhoon
In Microsoft’s report about Volt Typhoon, the company said detecting an activity that uses normal sign-in channels and system binaries requires behavioral monitoring and remediation requires closing or changing credentials for compromised accounts. In these cases, Microsoft suggests that security operations teams should examine the activity of compromised accounts for any malicious actions or exposed data.
To preclude this variety of attacks, Microsoft suggested these tips:
Enforce strong multifactor authentication policies by using hardware security keys, passwordless sign-in and password expiration rules and deactivating unused accounts.
Turn on attack surface reduction rules to block or audit activities associated with this threat.
Enable Protective Process Light for LSASS on Windows 11 devices. New enterprise-joined Windows 11 (22H2 update) installs have this feature enabled by default, per the company.
Enable Windows Defender Credential Guard, which is turned on by default for organizations using the Enterprise edition of Windows 11.
ZIRO Announces New Version of ZPC
Automate Routine Cisco UC Tasks or Offload Them to Your Helpdesk
LAS VEGAS - June 5, 2023 - PRLog -- ZIRO, a leading provider of unified communication managed services for Cisco and Microsoft, today at Cisco Live announced a new version of the ZIRO Platform for Cisco (ZPC), which automates routine UC tasks and offloads others, such as provisioning, to help desk personnel. The new ZPC features include provisioning Cisco's WebEx Calling and Packaged Contact Center Enterprise (PCCE) 4K and 12K offerings and support of a single pane of glass for Cisco UC and Microsoft Teams Calling.
Cisco Webex Calling offers mid-sized businesses an affordable cloud-based phone system. ZPC makes it easy for your help desk to provision WebEx calling to employees throughout your organization without the assistance of UC engineering resources.
Cisco's Packaged Contact Center Enterprise (Packaged CCE or PCCE) provides mid-size companies with an enterprise-class omnichannel contact center solution at an affordable price. ZPC's support of PCCE 4K & 12K makes deploying thousands of call center agents easy to provision and administer.
The emergence of Microsoft Teams Calling as a viable solution for mid-size and large enterprises has many companies looking to support hybrid UC environments. For customers supporting Cisco UC and Microsoft Teams Calling, ZPC now offers the ability to manage both solutions from one application. With ZPC, ZIRO customers can consolidate management tools and get more productivity from their helpdesk and IT personnel.
For more information about ZIRO's ZPC and how it can revolutionize UC management, visit our ZPC page or goziro.com.
Microsoft debuts Fabric, a single, integrated data analytics platform for AI and business
Microsoft Corp. today debuted a new and integrated data analytics platform called Microsoft Fabric that brings together all of the data and analytics tools an organization needs to build the foundation of artificial intelligence.
The platform was announced at Microsoft’s annual developer conference Build 2023 running today through Wednesday in Seattle. It integrates platforms such as Data Factory, Synapse and Power BI into a single, unified software-as-a-service product.
According to Microsoft, it will replace those disparate systems with a simpler, easier to manage and cost-effective integrated platform for companies looking to build and integrate AI into their technology stacks. It bundles all of the tools required by data professionals in one place, including data integration, data engineering, data warehousing, data science, real-time analytics, applied observability and business intelligence.
Microsoft said Fabric will make life much easier in a world that’s awash in data that’s generated by people’s devices, applications and interactions. Although organizations have already effectively harnessed much of this data to transform digitally and gain competitive advantages, there’s a need to simplify things as generative AI and large language models rise to the fore.
Services such as Azure OpenAI have enabled companies to create all manner of cutting edge AI experiences to make people more productive. But building such experiences is challenging as it requires a steady stream of clean data and a highly integrated analytics system. Most companies lack this, and instead have to contend with a labyrinth of disconnected tools and services, meaning AI development becomes both time consuming and extremely costly.
Microsoft Fabric is designed to change this, allowing organizations to use a single product that provides all of the capabilities their developers need to extract insights from data and make it available to AI or end users. At launch, Fabric supports seven core workloads, including Data Factory, which provides more than 150 connectors to popular cloud and on-premises data sources with drag-and-drop functionality.
To make life even easier, Microsoft Fabric will also integrate its own copilot tool, similar to something like GitHub Copilot. Available in preview soon, this will allow users to interact with Fabric using natural language commands and a chatlike interface, making it easier to generate code and queries, create AI plugins, enable custom Q&A, create visualizations and more.
Microsoft Fabric is built atop an open data lake platform called OneLake, which acts as a single source of truth and eliminates the need to extract, move or replicate data. Through OneLake, Microsoft said, Fabric also enables persistent data governance and a single capacity pricing model that scales as usage grows, while its open nature removes the risk of proprietary lock-in.
In addition to easing AI development tasks, Microsoft Fabric will help every user to harness the power of data, Microsoft said. The platform natively integrates with Microsoft 365 applications such as Microsoft Excel. As a result, someone using Excel can directly discover and analyze data from OneLake and generate a Power BI report in a single click.
Alternatively, someone using Microsoft Teams can use Fabric to bring data directly into their chats, channels, meetings and presentations, Microsoft said. Alternatively, a sales person using Dynamics 365 can use Fabric and OneLake to unlock insights on customer relationships, business processes and more.
Azure data updates
Microsoft Fabric was the headline among a slate of data-related updates announced at Build today. The company also announced a host of new capabilities in Power BI aimed at increasing users’ productivity.
The biggest one is Copilot for Power BI, available in preview now, which makes it easier to create reports or narrative summaries based on Power BI data in seconds. Users can also ask questions about their data in their natural language to generate answers, charts and visualizations.
Power BI Direct Lake, meanwhile, is a new storage mode that helps avoid data replication, while Power BI Desktop Developer Mode enables developer-centric workflows for Power BI datasets and reports through Git integration.
Microsoft’s cloud database service Azure Cosmos DB received a variety of updates that Boost developer productivity and optimize costs. These include a new Burst Capacity option that’s said to Boost performance for developers by making better use of the database’s idle throughput capacity to handle traffic spikes.
Microsoft claims that databases using standard provisioned throughput with burst capacity enabled will be able to maintain performance during short bursts when requests exceed the throughput limit. That, the company added, gives customers a “cushion” if they’re under-provisioned and reduces the number of rate-limited requests.
Other capabilities for Cosmos DB include hierarchical partition keys for more efficient partitioning strategies, materialized views for Cosmos DB for NoSQL, and .NET and Java SDK telemetry and app insights.
Finally, Cosmos DB is being updated with hyperscale pools, a shared resource model for Hyperscale databases that’s now in preview. Developers can build and manage new apps in the cloud and scale multiple databases that have varying and unpredictable usage demands.
Microsoft Pairs with VW to Cure HoloLens Virtual Motion Sickness
While we’ve previously covered the car industry’s plans for applying augmented reality to head-up displays projected onto cars’ windshields, an alternative solution would be for drivers to wear augmented reality glasses like Microsoft’s HoloLens 2.
Volkswagen thought that a great application for the HoloLens 2 was a driving program where drivers on a race track receive steering and braking cues through augmented reality. When VW went to the track to try the system in 2015, however, nothing happened.
It turns out that when the HoloLens 2 goes into a moving vehicle its sensors lose tracking, so the holograms it normally displays disappear. That’s when VW engineers made a call to Microsoft for some very intensive tech support.
Investigation revealed that HoloLens uses two main types of sensors to measure its motion — visible light cameras and an inertial measurement unit. The IMU measures acceleration and rotational speed.
Put this system into a moving car and it suffers the electronic equivalent of motion sickness. That happens when the motion we see is decoupled from the motion we feel, and when a HoloLens goes into a moving vehicle, the very same thing happens as the processor tries to reconcile what it’s seeing through the cameras with the motion its accelerometer detects.
Just as the problem is similar to human motion sickness, so is the solution. Looking out the window to get your bearings is often helpful for preventing motion sickness. For the HoloLens, that means connecting a GPS on the car to the glasses so that they have a firmer understanding of their position compared to their surroundings.
Joshua Elsdon, a Microsoft senior software engineer who worked on the project, had to find solutions from his Zurich apartment during the Covid shutdown. He mocked up a solution using a plastic box, sticking bits of tape inside to add visual texture and provide the HoloLens cameras elements to track.
He rode trams and buses around Zurich wearing a HoloLens headset, making sure its holograms held up as the vehicles moved. At night, Elsdon even rode up and down elevators in his apartment building to keep testing the technology.
“We had to do a lot of testing in my apartment,” Elsdon said. “These aren’t ideal development conditions. All of this stuff was done remotely and distributed across different countries, which was interesting.”
Now VW has a prototype of a system that could aid drivers. “We think mixed reality information is the most intuitive information we could provide to enhance our customers’ user experience,” said Andro Kleen, head of the data science team at Volkswagen Group Innovation. “Because what you see there, and what you need to process, is very close to what humans normally see and process. It’s not so abstract.”
Image courtesy of Microsoft Corp.
HoloLens 2 glasses look like regular glasses, with a few extra features attached.
The company says that it is expecting to use the HoloLens 2 in several primary areas. One is the fairly conventional application of augmented reality for engineers doing prototyping in R&D, where they can do the iterative development of head-up displays and underlying functions or for sensor data for automated driving.
There are also two new use cases for mobile applications of HoloLens technology now that the problem of its motion sickness has been solved. One is to support professional drivers of heavy-duty and transport vehicles. The goal is to help them control technical systems in familiar and comprehensible ways and to drive in complex environments like mines and dirty roads, where the glasses might display hidden road hazards.
Another use is for vehicle passengers, providing seamless user experiences for automated driving and for passenger entertainment by displaying navigation information, enabling gesture control of car functions, or by projecting additional information such as points of interest in the real dimension.
Microsoft has its own applications in mind. One is to provide help to maintenance technicians aboard ships at sea to perform necessary repairs. Previously, they were able to get assistance walking them through the procedures to do work while ships were docked. Now the system can be used on ships at sea.
“The more remote the equipment or machine is, the harder it is to get the expert on site,” says Marc Pollefeys, Microsoft director of science and an expert in 3D computer vision and machine learning who serves as a professor of computer science at ETH Zurich, a public research university. “This feature turned out to be critical to unlock HoloLens 2 for the maritime space.”
So far, HoloLens’ moving platform feature only works on large ships, but Microsoft says it will refine HoloLens 2 for use in elevators, trains, and other moving environments.
Environmental and Water Resources Engineering MS - University at Buffalo
School of Engineering and Applied Sciences
Program Description
Graduates of the Environmental and Water Resources Engineering MS program contribute to the solution of important environmental problems through careers in research, government, consulting or the private sector. Topics of current interest include the delivery of clean drinking water, treatment of air and wastewater pollutants, and the restoration of the Great Lakes and other aquatic ecosystems.
Program Contact
School of Engineering and Applied Sciences Office of Graduate Education 415 Bonner Hall Buffalo, NY 14260 Email: gradeng@buffalo.edu
In Person (100 percent of courses offered in person)
This program is officially registered with the New York State Education Department (SED).
Online programs/courses may require students to come to campus on occasion. Time-to-degree and number of credit hours may vary based on full/part time status, degree, track and/or certification option chosen. Time-to-degree is based on calendar year(s). Contact the department for details.
** At least one of the admissions tests is required for admission. Test and score requirements/exceptions vary by program. Contact the department for details.
Microsoft, Accenture to empower 13 Indian startups on social impact
Microsoft and Accenture on Wednesday announced the third cohort of the Project Amplify programme, which will support 13 Indian startups with solutions focusing on clean tech, circularity, regenerative agriculture, education and skilling.
The programme will also support the startups with testing and validating proofs-of-concept, reimagining the impact of their solutions through design thinking sessions, access to the latest technologies and guidance from experts at Microsoft and Accenture.
"Through our continued collaboration with Microsoft, we are applying our joint expertise to support social impact startups and help bring their solutions to our enterprise clients across the globe, scaling their impact," Sanjay Podder, managing director and Technology Sustainability Innovation lead at Accenture, said in a statement.
Moreover, the programme will offer startups access to Microsoft technologies, including up to $1,50,000 in Azure credits, M365 and D365, Visual Studio and GitHub Enterprise access, enterprise-grade Azure engineering support, networking opportunities with other global social entrepreneurs and an array of go-to-market resources.
"In collaboration with Accenture and as part of our Entrepreneurship for Positive Impact Initiative, we are humbled to support bold innovators in India, driving systemic change through their sustainable businesses," Jean-Philippe Courtois, Executive Vice President and President, National Transformation Partnerships, Microsoft, said in a statement.
Launched in 2020, previous cohorts of the programme focused on addressing issues in food safety, livelihood, education, sustainability, and skilling.