Exam Code: C1000-083 Practice exam 2023 by Killexams.com team
Foundations of IBM Cloud V2
IBM Foundations teaching
Killexams : IBM AI Foundations for Business

This specialization will explain and describe the overall focus areas for business leaders considering AI-based solutions for business challenges. The first course provides a business-oriented ...

Source: https://www.usnews.com/education/skillbuilder/ibm-ai-foundations-for-business-0_zn_b0Y_lEeqE2Q5NANyNFw

Killexams : IBM Volunteer Invests in STEM Education via Mentoring
ACCESSWIRE
2023-01-20

NORTHAMPTON, MA / ACCESSWIRE / January 20, 2023 / This Mentoring Month we're celebrating IBM mentors who help mentees connect to the world of work. Kyle volunteers with organizations in roles that range from mentoring students in building robots to tutoring high school programming classes and teaching girls to program mobile apps for social impact.

Learn more about IBM skills programs and mentors at https://skillsbuild.org/

View additional multimedia and more ESG storytelling from IBM on 3blmedia.com.

Contact Info:
Spokesperson: IBM
Website: https://www.3blmedia.com/profiles/ibm
Email: info@3blmedia.com

SOURCE: IBM

View source version on accesswire.com:
https://www.accesswire.com/736071/IBM-Volunteer-Invests-in-STEM-Education-via-Mentoring

Source: https://itnewsonline.com/news/IBM-Volunteer-Invests-in-STEM-Education-via-Mentoring/12896
Killexams : IBM Demonstrates Groundbreaking Artificial Intelligence Research Using Foundational Models And Generative AI

AI has already demonstrated its power to revolutionize industries and accelerate scientific investigation. One field of AI research that has made stunning advancements is in the area of foundation models and generative AI, which enables computers to generate original content based on input data. This technology has been used to create everything from music and art to fake news reports.

OpenAI recently showcased the impressive capabilities of artificial intelligence by offering free access to ChatGPT, a state-of-the-art generative transformer model. The move generated widespread media attention and excitement among users, highlighting the massive potential of AI. All of this has unfolded in the roughly three months since ChatGPT was released to the public.

Faced with the disruptive impact of OpenAI's GPT-3 model, Google and Microsoft were compelled to reveal AI integration plans for their respective search engines. The demonstration of AI's practical and powerful capabilities by OpenAI will no doubt raise the public’s expectations and demand for more advanced AI products in the future. OpenAI's move sparked one of the quickest and most significant disruptions of an industry segment ever witnessed.

It is universally acknowledged that human life is of paramount importance. In this article, we shed light on the life-saving potential of AI by examining its practical applications in the creation of new antibiotics and other scientific AI tools. Innovative use of foundation models and generative AI can increase revenues, optimize processes, and streamline the creation and accumulation of knowledge; it also has the potential to save millions of lives around the world. This discussion aims to raise the visibility of AI’s life-saving potential and highlight the need to expand its development and deployment in these areas.

From simple algorithms to breakthrough advances

Artificial intelligence (AI) had rather simple beginnings in the 1950s, with basic algorithms and mathematical models designed for specific tasks. Much later, in the 1990s, AI research underwent a major shift toward machine learning algorithms that enabled computers to improve their performance by analyzing patterns in data and transferring that knowledge to new applications. This shift gave rise to numerous breakthroughs in the field, including the development of deep learning algorithms that revolutionized areas such as computer vision and natural language processing (NLP). These advances have in turn led to new achievements and further expanded the potential of AI.

Today, AI researchers continue to push the boundaries by developing new algorithms and models that can tackle increasingly complex tasks. AI, and the size of its models, continues to evolve at an unprecedented pace, producing responses that are more human-like and expanding the range of tasks it can perform. Breakthroughs and applications are still being made in areas such as natural language processing (NLP), computer vision, and robotics. Despite its limitations and challenges, AI has proven to be a transformative force across a wide array of industries and fields, including healthcare, finance, transportation, and education.

Cutting-edge AI research by an IBM Master Inventor

IBM has one of the largest and most well-funded AI research programs in the world, and I recently had the opportunity to discuss it with Dr. Payel Das, a principal research staff member and manager at IBM Research who is also an IBM Master Inventor.

Dr. Das has served as an adjunct associate professor in the department of Applied Physics and Applied Mathematics (APAM) at Columbia University. She is currently serving as an advisory board member of AMS at Stony Brook University. Dr. Das received her B.S. from Presidency College in Kolkata, India, and her M.S. from the Indian Institute of Technology in Chennai, India. She was awarded a Ph.D. in theoretical biophysics from Rice University in Houston, Texas. Dr. Das has coauthored more than 40 peer-reviewed publications. She has also received awards from the Harvard Belfer Center (TAPP 2021) and IEEE Open Source (2022), along with a number of IBM awards, including the IBM Outstanding Technical Achievement Award (the highest technical award at IBM), two IBM Research Division Awards, one IBM Eminence and Excellence Award, and two IBM Invention Achievement Awards.

As a member of the Trustworthy AI department and the generative AI lead within IBM Research, Dr. Das is currently focused on developing new algorithms, methods, and tools to develop generative AI systems that are created from foundation models.

Her team is also working on using synthetic data to make the AI models more trustworthy and to ensure fairness and robustness in downstream AI applications.

The power of synthetic data and how it advances AI

In our data-driven era, synthetic data has become an indispensable tool for testing and training AI models. This computer-generated information is cost-effective to produce, comes with automatic labeling, and avoids many of the ethical, logistical, and privacy challenges associated with training deep learning models on real-world data.

Synthetic data is critical for business applications as it offers solutions when real data is scarce or inadequate. One of the key advantages of synthetic data is its ability to be generated in vast quantities, making it ideal for training AI models. Furthermore, synthetic data can be designed to encompass a diverse range of variations and examples, leading to better generalization and usability of the model. These attributes make synthetic data an indispensable tool in the advancement of AI and its real-world applications.

It is crucial that the generated synthetic data adheres to user-defined controls to ensure it serves its intended purpose and minimizes potential risks. The specific controls required vary depending on the intended application and desired outcome. Ensuring that synthetic data aligns with these controls is essential to ensure its effectiveness and safety in real-world applications.
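
For illustration only, here is a minimal Python sketch of synthetic tabular data generated under two user-defined controls, class balance and noise level. The function name, feature design, and numbers are invented for the example and are not drawn from IBM's tooling.

import numpy as np

rng = np.random.default_rng(0)

def make_synthetic(n_rows, pos_fraction=0.3, noise_std=0.1):
    # labels come for free, and the caller controls class balance and noise level
    y = (rng.random(n_rows) < pos_fraction).astype(int)
    # two informative features whose means depend on the label, plus Gaussian noise
    X = np.column_stack([
        y + rng.normal(0.0, noise_std, n_rows),
        2.0 * y + rng.normal(0.0, noise_std, n_rows),
    ])
    return X, y

X, y = make_synthetic(10_000, pos_fraction=0.5, noise_std=0.2)

Because the labels are produced alongside the features, no manual annotation is needed, which reflects the cost and labeling advantages described above.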

Transforming the future with universal representation models

The first AI models utilized feedforward neural networks, which were effective in modeling non-sequential data. However, they were not equipped to handle sequential data. To overcome this limitation, recurrent neural networks (RNNs) were developed in the 1990s, but it wasn't until around 2010 that they saw widespread implementation.

This breakthrough in technology expanded the capabilities of AI to process sequential data and paved the way for further advancements in the field. Then another type of AI model called a transformer, radically improved AI capabilities.

The transformer made its first appearance in a 2017 Google research paper that proposed a new type of neural network architecture. Transformers also incorporated self-attention mechanisms that allowed models to focus on relevant parts of an input and made more accurate predictions.

The self-attention mechanism is a defining feature that sets transformers apart from other encoder-decoder architectures. This mechanism proves especially beneficial in natural language processing as it enables the model to grasp the relationships between words in a sentence and recognize long-term dependencies. The transformer accomplishes this by assigning weights to each element in the sequence, based on its relevance to the task. This way, the model can prioritize the most crucial parts of the input, resulting in more context-aware and informed predictions or decisions. The integration of self-attention mechanisms has greatly advanced the capabilities of AI models in natural language processing.
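
For readers who want the weighting step spelled out, below is a bare-bones sketch of single-head self-attention in Python with NumPy; real transformers add multiple heads, masking, positional information, and projections learned during training, so this is a simplification rather than a faithful implementation.

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: learned projection matrices
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # relevance of every token to every other token
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability for the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # attention weights sum to 1 per token
    return weights @ V                               # each output is a weighted mix of value vectors

Entry weights[i, j] is precisely the importance the model assigns to token j when building the new representation of token i.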

According to Dr. Das, in recent years there has been a shift away from RNNs as the primary architecture for natural language processing tasks. RNNs can be difficult to train and can suffer from vanishing gradient problems, which can make it challenging to learn long-term dependencies in language data. By contrast, transformers have been shown to be more effective in achieving state-of-the-art results on a variety of natural language processing tasks.

Unlocking the power of foundational models

Models that are trained using large-scale data and self-supervision techniques can produce a universal representation that is not specific to any particular task. This representation can then be utilized in various other applications with little to no further adjustment.

These models are referred to as "foundational models," a term coined by Stanford University in a 2021 research paper. Many of today's foundational models adopt transformer architecture and have proven versatile in a broad range of natural language processing (NLP) tasks. This is due to their pre-training on vast datasets, which results in powerful machine learning models ready for deployment. The use of foundational models has greatly impacted and improved the field of NLP.
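
The reuse pattern can be illustrated with the open-source Hugging Face transformers library, using a generic public checkpoint as a stand-in for any pretrained foundation model; the checkpoint choice, the example sentences, and the two-class head are arbitrary assumptions made for this sketch.

import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
backbone = AutoModel.from_pretrained("bert-base-uncased")   # pretrained once, reused everywhere
backbone.eval()

batch = tok(["invoice overdue by 30 days", "payment received, thank you"],
            padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = backbone(**batch).last_hidden_state    # (batch, seq_len, hidden_dim)
    features = hidden[:, 0, :]                      # [CLS] vector as the universal representation

head = torch.nn.Linear(features.shape[-1], 2)       # only this tiny task head needs training
logits = head(features)

The heavy lifting happened during pretraining; adapting to a new task touches only the small head on top.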

Dr. Das and the IBM research team have been involved in a significant amount of AI research with foundation models and generative AI.

The above graphic shows how a foundation model can be used to build models for different fields by using text as the input data. They may or may not use transformer architecture. On the left side of the graphic, a large language model is shown, which progressively maps letters to words to sentences and finally to language.

The illustration on the right side of the graphic depicts a chemistry transformer model, which connects atoms to molecules and to chemistry. The same concept could be applied to build foundation models for biology or other related fields by representing biological or chemical molecules as text.

It's crucial to note that the transformer architecture is adaptable to a diverse array of fields, as long as the input data can be expressed in textual form. This versatility makes the transformer architecture a valuable tool for creating machine learning models in many domains.
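
As a toy example of chemistry rendered as text, the SMILES string for aspirin can be tokenized exactly like a sentence; the character-level vocabulary below is deliberately simplistic, and real chemistry models use much richer tokenizers and far larger corpora.

smiles = "CC(=O)OC1=CC=CC=C1C(=O)O"                  # aspirin written as a SMILES string
vocab = {ch: i for i, ch in enumerate(sorted(set(smiles)))}
token_ids = [vocab[ch] for ch in smiles]
print(token_ids)                                     # the same integer-sequence input a language transformer consumes

Once a molecule is a sequence of tokens, the training machinery built for language carries over largely unchanged.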

Pushing the boundaries of creativity with generative AI

Generative models have the ability to create new and unique images, audio, or text for a variety of applications. These models have also enabled AI systems to become more effective at processing complex data and have opened up new possibilities for using AI in a wide range of applications.

Foundation models serve as a strong basis for creating generative models due to their ability to handle and learn from vast amounts of data. By adjusting the parameters of these models to focus on a specific task, like generating images or text, new generative AI models can be created that produce unique content within specific fields.

As an illustration, if the objective is to develop a generative AI model for art, a pre-trained foundational model would first be trained on a vast collection of art images. After successful training, it could then be utilized to produce novel and original pieces of art. Above is a sample of art created by an AI program named DALL-E 2 in response to a prompt requesting a painted portrait of a human face, as perceived by AI.

Overcoming the small data challenge in generative AI

“When we first started working on generative AI,” Dr. Das said, “it occurred to us that one of our problems was learning from small data for any domain-specific or any industry-specific application.”

Generative AI models require large amounts of data to accurately learn and generate new, similar data. When working with small data sets, the performance and usefulness of these models can be limited. Dr. Das recognizes this challenge and notes that techniques like transfer learning and data augmentation can help improve their performance in these situations.
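
A brief sketch of those two techniques, using PyTorch and torchvision as generic stand-ins rather than IBM's own stack: augmentation stretches a small labeled image set further, and transfer learning freezes a pretrained backbone so only a small new head must learn from the limited data. The five-class head and the hyperparameters are placeholders.

import torch
import torchvision
from torchvision import transforms

augment = transforms.Compose([                        # data augmentation: more variety from few examples
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
])

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
for p in model.parameters():
    p.requires_grad = False                           # freeze the pretrained features
model.fc = torch.nn.Linear(model.fc.in_features, 5)   # new task head is the only part that trains

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)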

Despite the challenges posed by small data sets for generative AI models, for each of the domains in the above graphic, a vast amount of unlabeled data exists in businesses. This data provides an opportunity to train custom foundational models, enabling the solution of previously thought unsolvable problems. This aligns with IBM Research's focus on exploring new AI capabilities through generative AI and pushing the boundaries of AI science.

Broad generative AI research

IBM has made significant contributions in each of the domains represented in the image. Their work is so extensive, it is difficult to cover all their achievements in a single article.

Synthesizing antimicrobials with generative AI

AI has the potential to revolutionize various fields and speed up scientific progress. As an example, Dr. Das and her research team have leveraged AI to develop innovative antimicrobials to fight against lethal antibiotic-resistant bacteria.

The Fight Against Superbugs

Antibiotics were first used to treat serious infections in the 1940s. Since then, antibiotics have saved millions of lives and transformed modern medicine. Yet the CDC estimates that about 47 million antibiotic treatments are prescribed each year for infections that don’t need antibiotics.

The overuse of antibiotics is a critical problem because it contributes to the development of antibiotic-resistant infections caused by common bacteria like E.coli and staphylococcus, as well as more dangerous and rare bacteria such as MRSA. These resistant infections are challenging to treat and can result in serious consequences like sepsis, organ malfunction, and death.

When traditional antibiotics are no longer able to effectively kill bacteria, it becomes much more difficult or even impossible to treat and control infections. These antibiotic-resistant bacteria—commonly called superbugs—can spread quickly and cause serious infections, particularly in hospitals and other healthcare settings. Superbugs can also be found in the environment, in food, and on surfaces, plus they can be transmitted from person to person.

It is a serious global health problem. Drug-resistant diseases kill 700,000 people annually around the world; by 2050, that number is expected to rise to 10 million deaths per year.

How bacteria outsmart antibiotics

Bacteria and viruses transform into superbugs through the activation of innate defense strategies that render antibiotics ineffective. These defense mechanisms can involve physical, chemical, or biological processes that safeguard the germs and enable them to escape or counteract danger to their existence. Such processes may produce enzymes that inactivate antibiotics, alter the bacterial cell wall making the organism less responsive to the drugs, or allow the bacteria to obtain genetic information from other bacteria that possess inherent immunity to antibiotics.

Streamlining drug development with AI

The conventional method of creating a new antimicrobial drug is a lengthy and expensive undertaking, frequently taking many years and a hefty sum of money before it can be commercially available. But recent advancements in artificial intelligence (AI) are revolutionizing the drug discovery and development process.

By utilizing AI's ability to generate and evaluate numerous possible drug candidates, researchers can swiftly pinpoint the most promising options and concentrate their efforts on them. This streamlines the drug development process, cutting down the time and cost involved and leading to the production of more efficient antimicrobial drugs at a quicker pace.

In a collaborative effort with other organizations, Dr. Das and her team at IBM conducted a study to find innovative solutions to the problem of antimicrobial resistance. The study utilized AI to synthesize and evaluate 20 unique antimicrobial peptide designs, chosen from a pool of 90,000 sequences.

The AI models were specifically designed to combat antibiotic resistance, incorporating controls for broad-spectrum efficacy and low toxicity, and slowing down the emergence of resistance. This approach aimed to create effective solutions that not only fight against resistant bacteria but also minimize the risk of harmful side effects and prevent further resistance from developing.

The team tested these designs against a diverse range of gram-negative and gram-positive bacteria, which led to the identification of six successful drug candidates. The toxicity of these candidates was further evaluated in both a mouse model and a test tube.
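
The overall generate-then-screen loop can be caricatured in a few lines of Python. Everything below is a stand-in: the random sampler and scoring lambdas merely play the roles of the trained generative model and property predictors, and the thresholds are invented for illustration.

import random

amino_acids = "ACDEFGHIKLMNPQRSTVWY"
sample_peptide = lambda: "".join(random.choice(amino_acids) for _ in range(random.randint(12, 25)))
predict_activity = lambda seq: random.random()        # placeholder for a broad-spectrum efficacy predictor
predict_toxicity = lambda seq: random.random()        # placeholder for a toxicity predictor

candidates = [sample_peptide() for _ in range(90_000)]                      # generate a large pool
scored = [(s, predict_activity(s), predict_toxicity(s)) for s in candidates]
viable = [s for s in scored if s[1] > 0.8 and s[2] < 0.2]                   # controls: efficacy up, toxicity down
viable.sort(key=lambda s: s[1], reverse=True)
shortlist = [seq for seq, _, _ in viable[:20]]                              # a handful of designs go to the wet lab

The value of the real system lies in how good the generator and predictors are; the loop itself is simple.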

AI-powered success

Dr. Das expressed excitement about the success of the design, pointing out that it embodies many of the sought-after characteristics expected in the next generation of drug candidates. The accompanying illustration outlines the plan and estimated duration of using AI to speed up the antimicrobial design process, which can be accomplished in just one and a half months, significantly quicker than the conventional method that takes several years.

The use of AI in accelerating the discovery of new antimicrobial drugs has proven to be a game-changer, offering clear benefits such as faster speed and reduced expenses. Moreover, AI models offer a more streamlined approach by directing the attention of researchers to the most promising leads. Additionally, generative AI enables scientists to design innovative drug compounds that boast unique features and elevated efficacy compared to existing drugs.

The researchers at IBM have harnessed the power of generative AI to streamline the development of new antimicrobial drugs. Additionally, they have used AI to create valuable tools, such as MolFormer and MolGPT, for predicting the properties of chemical molecules which plays a crucial role in various fields including drug discovery and material design.

Wrapping up

Generative AI has captured the attention of various industries, including music, art, healthcare, and pharmaceuticals, as one of the most exciting advancements in AI in recent times. Despite its limitations and challenges, AI continues to demonstrate its potential to revolutionize different fields.

Its ability to swiftly create and test life-saving medicines for antibiotic-resistant bacteria and other pathogens is a testament to its significance and promise.

With the recent buzz surrounding OpenAI's GPT-3 demonstration and the subsequent developments by Google and Microsoft, it's likely we will not only see a surge in AI-powered products in the coming year, but further disruptions as well. Some may be trivial, but the hope is that many will feature meaningful integrations of AI that will be beneficial to the markets.

Analyst Notes:

  1. While some may question the absence of a discussion on the combination of facial recognition and AI, it is important to note that facial recognition technology and GPT models are separate AI technologies with distinct functions and methods. IBM, which was once a leader in human face data, has chosen not to work in the field of facial recognition due to the controversial political and privacy issues surrounding it. However, IBM is still actively involved in other AI modalities such as language processing, image recognition, graphics analysis, speech recognition, and various combinations in multimodal AI applications.
  2. A final remark on the market response to the release of GPT-3: It was noteworthy that Microsoft appeared well-prepared when the GPT-3 news broke, whereas Google seemed caught off guard and was forced to hold an emergency meeting with its founders to come up with a plan. In contrast, Microsoft had already planned how it was going to integrate AI into its operations. There is a significant disparity in search revenue between the two companies, with Microsoft earning a total of $22 billion in 2022 search revenue, while Google had $59 billion in the last quarter of 2022. It is surprising that Google was not more prepared to defend against the potential impact of GPT-3, considering the model’s search-applicable capabilities and the obvious threat it posed to one-third of Google's total revenue.
  3. DALL-E 2, mentioned in the article, is a cutting-edge deep learning model that generates digital images based on natural language input. It is based on a version of OpenAI’s GPT-3.
  4. For more information about more of IBM’s AI research, you might be interested in my previous articles:

IBM CodeNet: Artificial Intelligence That Can Program Computers And Solve A $100 Billion Legacy Code Problem

IBM’s AutoAI Has The Smarts To Make Data Scientists A Lot More Productive – But What’s Scary Is That It’s Getting A Whole Lot Smarter

Note: Moor Insights & Strategy writers and editors may have contributed to this article. 

Moor Insights & Strategy provides or has provided paid services to technology companies like all research and tech industry analyst firms. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and video and speaking sponsorships. The company has had or currently has paid business relationships with 8×8, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Ampere Computing, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, C3.AI, Calix, Cadence Systems, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cohesity, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex, Foundries.io, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, HYCU, IBM, Infinidat, Infoblox, Infosys, Inseego, IonQ,  IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Juniper Networks, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, LoRa Alliance, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, Multefire Alliance, National Instruments, Neat, NetApp, Nightwatch, NOKIA, Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), NXP, onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung Electronics, Samsung Semi, SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign,TE Connectivity, TensTorrent, Tobii Technology, Teradata,T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler. Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Fivestone Partners, Frore Systems, Groq, MemryX, Movandi, and Ventana Micro. 

Source: Paul Smith-Goodson, Forbes, https://www.forbes.com/sites/moorinsights/2023/02/13/ibm-demonstrates-groundbreaking-artificial-intelligence-research-using-foundational-models-and-generative-ai/
Killexams : How To Know Layoffs Are Coming And What To Do About It

Remember when, not so long ago, tech companies couldn’t hire fast enough? Talent wars were fought and won all over Silicon Valley, as firms battled it out for the best in the business, each offering free-flowing perks more lavish than the next. As profits continued to climb, it seemed as if the party would never end. 

Then it did. And the layoff announcements began. First Amazon started letting go of what would be 18,000 employees, before companies such as Salesforce, IBM, Snap, and Coinbase followed suit. Twitter let go of almost half its workforce, and Meta laid off around 11,000 people. 

Last week, Google’s parent company, Alphabet, said it planned to lay off 12,000 of its people, Microsoft  said it would cut 10,000 employees, and Spotify said it would reduce its staff by 6 percent, about 600 people. In total, more than 216,000 tech employees have been laid off by more than 1,185 companies since the start of 2022, according to Layoffs.fyi, a site that tracks job cuts in the sector. 

Goodbye The Great Resignation, hello The Great Apprehension. It’s little wonder that 78 percent of American workers are scared about their job security, while 85 percent of workers rank job loss as a top concern, according to the latest Edelman Trust Barometer. Layoffs are among life’s most challenging experiences, after all.

Know the signs 

So, how do you know when one might be coming? There are some signs to look out for: a memo from the CEO about cost savings or efficiency, and/or the departure of some top-level executives. There might be a shift in communication and transparency. Or maybe it just feels like everyone’s walking on eggshells.  

If any of this sounds familiar, now is not the time to idly sit and wait. Instead, arm yourself with options, and perhaps even see this as an opportunity to take your career to the next level.  

But first, know that being laid off isn’t a reflection of you and your hard work––it highlights your former company’s lack of planning, or a change in its strategy. Also, don’t worry about how it’s going to look on your resume. Getting laid off no longer carries the stigma it once did, particularly at this point in time.  

When you start to look for a new role, write down exactly what you want in your career, and what’s important to you in your next move. These values will help keep you aligned during your search. Do you want to work for a big company or a small company? Would you consider moving states or countries? Maybe it’s time for a different career?  

Enter, some good news (finally): There are lots of industries that are booming––the federal government, nonprofits, private companies, healthcare and higher education––and they’re just waiting to scoop up people with the right skills. What’s more, the labor market is still strong, with 10 million jobs currently open, up 60 percent from pre-pandemic levels, meaning the overall unemployment rate in the U.S. is at a low 3.5 percent.

And finally, not all companies have announced a round of job cuts in the last six months. Here are three firms that are hiring, plus, you can discover hundreds more open roles on The Hill Jobs. 

Senior Manager, International Tax, Deloitte 

As a Tax Senior Manager, you will work within an engagement team and draw on experience in accounting and taxation, assisting clients with their implementation of specific international tax structures and processes. Overseeing complex tax computation projects for clients in diverse industries and researching and preparing materials for consulting projects is also a key component of this role. To be suitable, you’ll need eight years of experience providing tax planning services or preparing and reviewing client work, preferably with a focus on international taxation. A Bachelor’s degree in accounting, finance or another business-related field is needed, as is previous Big 4, public accounting or consulting experience. Get the full job description here.

Manager, Illiquid Credit Fund Management, The Carlyle Group 

The Carlyle Group is looking for a Manager, Illiquid Credit Fund Management to assist in all aspects of the fund management of certain Illiquid Credit Carry Funds, with involvement across the broader credit fund management group. Responsibilities will include accounting and reporting, and oversight and review of quarterly fund closes, which includes review of related support for all balance sheet and income statement accounts. A Bachelor’s degree in accounting and/or finance is required for this position, with a CPA preferred. A minimum of five to seven years’ experience in either private credit or public accounting with supervisory experience is necessary, plus experience in alternatives and familiarity with fund accounting/GP/LP structures and/or SMAs. Apply for this job here.

Editorial Director (Senior Writer & Editor), Commonwealth Foundation 

The Commonwealth Foundation is seeking a forward-thinking Senior Writer & Editor to develop a world-class editorial capability and to lead the editorial strategy for owned and placed content. 

You’ll serve as lead writer and editor within the organization, pitch editors at prominent state and national outlets, and oversee the content pipeline, working closely with members of the Commonwealth Foundation production team to provide exceptional writing and editing services that advance policy goals and business objectives. Over five years’ experience as a policy and opinion writer, able to craft strong arguments and secure op-ed placements in prominent outlets at the state and national level, is required, as is a track record of identifying opportunities and executing on them rapidly, either independently or by mobilizing a team. Find out more about this role here.

For more career opportunities and to find a role that you love, visit The Hill Jobs.  

Source: https://thehill.com/lobbying/3841507-how-to-know-layoffs-are-coming-and-what-to-do-about-it/
Killexams : Massachusetts Institute of Technology researchers develop way to improve machine-learning models’ reliability

Powerful machine-learning models are being used to help people tackle tough problems such as identifying disease in medical images or detecting road obstacles for autonomous vehicles. But machine-learning models can make mistakes, so in high-stakes settings it’s critical that humans know when to trust a model’s predictions.

Uncertainty quantification is one tool that improves a model’s reliability; the model produces a score along with the prediction that expresses a confidence level that the prediction is correct. While uncertainty quantification can be useful, existing methods typically require retraining the entire model to give it that ability. Training involves showing a model millions of examples so it can learn a task. Retraining then requires millions of new data inputs, which can be expensive and difficult to obtain, and also uses huge amounts of computing resources.

Researchers at MIT and the MIT-IBM Watson AI Lab have now developed a technique that enables a model to perform more effective uncertainty quantification, while using far fewer computing resources than other methods, and no additional data. Their technique, which does not require a user to retrain or modify a model, is flexible enough for many applications.

The technique involves creating a simpler companion model that assists the original machine-learning model in estimating uncertainty. This smaller model is designed to identify different types of uncertainty, which can help researchers drill down on the root cause of inaccurate predictions.

“Uncertainty quantification is essential for both developers and users of machine-learning models. Developers can utilize uncertainty measurements to help develop more robust models, while for users, it can add another layer of trust and reliability when deploying models in the real world. Our work leads to a more flexible and practical solution for uncertainty quantification,” says Maohao Shen, an electrical engineering and computer science graduate student and lead author of a paper on this technique.

Shen wrote the paper with Yuheng Bu, a former postdoc in the Research Laboratory of Electronics (RLE) who is now an assistant professor at the University of Florida; Prasanna Sattigeri, Soumya Ghosh, and Subhro Das, research staff members at the MIT-IBM Watson AI Lab; and senior author Gregory Wornell, the Sumitomo Professor in Engineering, who leads the Signals, Information, and Algorithms Laboratory within RLE and is a member of the MIT-IBM Watson AI Lab. The research will be presented at the AAAI Conference on Artificial Intelligence.

Quantifying uncertainty

In uncertainty quantification, a machine-learning model generates a numerical score with each output to reflect its confidence in that prediction’s accuracy. Incorporating uncertainty quantification by building a new model from scratch or retraining an existing model typically requires a large amount of data and expensive computation, which is often impractical. What’s more, existing methods sometimes have the unintended consequence of degrading the quality of the model’s predictions.

The MIT and MIT-IBM Watson AI Lab researchers have thus zeroed in on the following problem: Given a pretrained model, how can they enable it to perform effective uncertainty quantification?

They solve this by creating a smaller and simpler model, known as a metamodel, that attaches to the larger, pretrained model and uses the features that larger model has already learned to help it make uncertainty quantification assessments.

“The metamodel can be applied to any pretrained model. It is better to have access to the internals of the model, because we can get much more information about the base model, but it will also work if you just have a final output. It can still predict a confidence score,” Sattigeri says.

They design the metamodel to produce the uncertainty quantification output using a technique that includes both types of uncertainty: data uncertainty and model uncertainty. Data uncertainty is caused by corrupted data or inaccurate labels and can only be reduced by fixing the dataset or gathering new data. In model uncertainty, the model is not sure how to explain the newly observed data and might make incorrect predictions, most likely because it hasn’t seen enough similar training examples. This issue is an especially challenging but common problem when models are deployed. In real-world settings, they often encounter data that are different from the training dataset.
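
Conceptually, the approach can be pictured as a small network bolted onto a frozen base model. The sketch below is illustrative only and compresses the idea to a single confidence score; the encode and head attribute names are assumptions about how a base model might expose its features, not the paper's actual interfaces.

import torch
import torch.nn as nn

class MetaModel(nn.Module):
    def __init__(self, feature_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),          # confidence score in [0, 1]
        )

    def forward(self, features):
        return self.net(features)

def predict_with_confidence(base_model, meta_model, x):
    with torch.no_grad():
        features = base_model.encode(x)              # assumed hook into the frozen base model's features
        prediction = base_model.head(features)       # the base model's usual output
    confidence = meta_model(features)                # the metamodel scores the same features
    return prediction, confidence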

“Has the reliability of your decisions changed when you use the model in a new setting? You want some way to have confidence in whether it is working in this new regime or whether you need to collect training data for this particular new setting,” Wornell says.

Validating the quantification

Once a model produces an uncertainty quantification score, the user still needs some assurance that the score itself is accurate. Researchers often validate accuracy by creating a smaller dataset, held out from the original training data, and then testing the model on the held-out data. However, this technique does not work well in measuring uncertainty quantification because the model can achieve good prediction accuracy while still being over-confident, Shen says.

They created a new validation technique by adding noise to the data in the validation set — this noisy data is more like out-of-distribution data that can cause model uncertainty. The researchers use this noisy dataset to evaluate uncertainty quantifications.
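
Continuing the sketch above, a rough version of that validation idea is to perturb the held-out set and check that the confidence scores separate correct from incorrect predictions on the harder, shifted data. The Gaussian noise, the classification setting, and the reuse of the predict_with_confidence helper are all illustrative assumptions.

import torch

def evaluate_uq_under_noise(base_model, meta_model, x_val, y_val, noise_std=0.5):
    x_noisy = x_val + noise_std * torch.randn_like(x_val)   # drift the data away from the training distribution
    preds, conf = predict_with_confidence(base_model, meta_model, x_noisy)
    correct = preds.argmax(dim=-1) == y_val
    # well-calibrated scores should be higher on correct predictions than on mistakes
    return conf[correct].mean().item(), conf[~correct].mean().item()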

They tested their approach by seeing how well a meta-model could capture different types of uncertainty for various downstream tasks, including out-of-distribution detection and misclassification detection. Their method not only outperformed all the baselines in each downstream task but also required less training time to achieve those results.

This technique could help researchers enable more machine-learning models to effectively perform uncertainty quantification, ultimately aiding users in making better decisions about when to trust predictions.

Moving forward, the researchers want to adapt their technique for newer classes of models, such as large language models that have a different structure than a traditional neural network, Shen says.

The work was funded, in part, by the MIT-IBM Watson AI Lab and the U.S. National Science Foundation.

Source: https://indiaeducationdiary.in/massachusetts-institute-of-technology-researchers-develop-way-to-improve-machine-learning-models-reliability/
Killexams : How IBM’s new supercomputer is making AI foundation models more enterprise-budget friendly

Foundation models are changing the way that artificial intelligence (AI) and machine learning (ML) are able to be used. All that power comes with a cost though, as building AI foundation models is a resource-intensive task.

IBM announced today that it has built out its own AI supercomputer to serve as the literal foundation for its foundation model–training research and development initiatives. Named Vela, it’s been designed as a cloud-native system that makes use of industry-standard hardware, including x86 silicon, Nvidia GPUs and ethernet-based networking.

The software stack that enables the foundation model training makes use of a series of open-source technologies including Kubernetes, PyTorch and Ray. While IBM is only now officially revealing the existence of the Vela system, it has actually been online in various capacities since May 2022.

“We really think this technology concept around foundation models has huge, tremendous disruptive potential,” Talia Gershon, director of hybrid cloud infrastructure research at IBM, told VentureBeat. “So, as a division and as a company, we’re investing heavily in this technology.”


The AI- and budget-friendly foundation inside Vela

IBM is no stranger to the world of high-performance computing (HPC) and supercomputers. One of the fastest supercomputers on the planet today is the Summit supercomputer built by IBM and currently deployed in the Oak Ridge National Laboratory.

The Vela system, however, isn’t like other supercomputer systems that IBM has built to date. For starters, the Vela system is optimized for AI and uses x86 commodity hardware, as opposed to the more exotic (and expensive) equipment typically found in HPC systems.

Unlike Summit, which uses the IBM Power processor, each Vela node has a pair of Intel Xeon Scalable processors. IBM is also loading up on Nvidia GPUs, with each node in the supercomputer packed with eight 80GB A100 GPUs. In terms of connectivity, each of the compute nodes is connected via multiple 100 gigabits-per-second ethernet network interfaces. 

Vela has also been purpose built for cloud native, meaning it runs Kubernetes and containers to enable application workloads. More specifically, Vela relies on Red Hat OpenShift, which is Red Hat’s Kubernetes platform. Vela has also been optimized to run PyTorch for ML training and uses Ray to help scale workloads.

IBM has also built out a new workload-scheduling system for its new cloud-native supercomputer. For many of its HPC systems, IBM has long used its own Spectrum LSF (load-sharing facility) for scheduling, but that system is not what the new Vela supercomputer is using. IBM has developed a new scheduler called MCAD (multicluster app dispatcher) to handle cloud-native job scheduling for foundation model AI training.
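
IBM has not published Vela's training code, but the kind of job such a stack schedules resembles ordinary data-parallel PyTorch training, sketched below. The launcher (for example torchrun), the dataset, and the model factory are assumptions, and the OpenShift/MCAD scheduling layer is not shown.

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def train_worker(model_fn, dataset):
    dist.init_process_group("nccl")                  # NVIDIA GPUs talking over the cluster network
    local_rank = int(os.environ["LOCAL_RANK"])       # set by the launcher in each container
    torch.cuda.set_device(local_rank)

    model = DDP(model_fn().cuda(local_rank), device_ids=[local_rank])
    sampler = torch.utils.data.distributed.DistributedSampler(dataset)
    loader = torch.utils.data.DataLoader(dataset, batch_size=32, sampler=sampler)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for x, y in loader:
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(x.cuda(local_rank)), y.cuda(local_rank))
        loss.backward()                              # gradients are synchronized across all workers here
        optimizer.step()
    dist.destroy_process_group()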

IBM’s growing foundation model portfolio

All that hardware and software that IBM put together for Vela is already being used to support IBM’s foundation model efforts.

“All of our foundation models’ research and development are all running cloud native on that stack on the Vela system and IBM Cloud,” Gershon said.

Just last week, IBM announced a partnership with NASA to help build out foundation models for climate science. IBM is also working on a foundation model called MoLFormer-XL for life sciences that can help create new molecules in the future.

The foundation model work also extends to enterprise IT with the Project Wisdom effort that was announced in October 2022. Project Wisdom is being developed in support of the Red Hat Ansible IT configuration technology. Typically, IT system configuration can be a complicated exercise that requires domain knowledge to do properly. Project Wisdom aims to bring a natural language interface to Ansible, whereby users will simply type in what they want and the foundation model will understand and then help execute the desired task.

Gershon also hinted at a new IBM foundation model for cybersecurity that has not yet been publicly detailed and is being developed using the Vela supercomputer.

“We haven’t said much about it externally, I think on purpose,” Gershon said about the foundation model for cybersecurity. “We do believe this technology is going to be transformational in terms of detecting threats.”

While IBM is building out a portfolio of foundation models, it is not intending to directly compete against some of the well-known general foundation models, such as OpenAI’s GPT-3.

“We are not focused on necessarily building general AI, whereas maybe some other players kind of state that more as the goal,” Gershon said. “We’re interested in foundation models because we think that it has tremendous business value for enterprise use cases.”


Source: https://venturebeat.com/ai/how-ibms-new-supercomputer-is-making-ai-foundation-models-more-enterprise-budget-friendly/
Killexams : IBM and NASA Collaborate to Research Impact of Climate Change with AI

New IBM Foundation Model Technology Leverages NASA Earth Science Data for Geospatial Intelligence

YORKTOWN HEIGHTS, N.Y., Feb. 1, 2023 /CNW/ -- IBM (NYSE: IBM) and NASA's Marshall Space Flight Center today announce a collaboration to use IBM's artificial intelligence (AI) technology to discover new insights in NASA's massive trove of Earth and geospatial science data. The joint work will apply AI foundation model technology to NASA's Earth-observing satellite data for the first time.


Foundation models are types of AI models that are trained on a broad set of unlabeled data, can be used for different tasks, and can apply information about one situation to another. These models have rapidly advanced the field of natural language processing (NLP) technology over the last five years, and IBM is pioneering applications of foundation models beyond language.

Earth observations that allow scientists to study and monitor our planet are being gathered at unprecedented rates and volume. New and innovative approaches are required to extract knowledge from these vast data resources. The goal of this work is to provide an easier way for researchers to analyze and draw insights from these large datasets. IBM's foundation model technology has the potential to speed up the discovery and analysis of these data in order to quickly advance the scientific understanding of Earth and response to climate-related issues.

IBM and NASA plan to develop several new technologies to extract insights from Earth observations. One project will train an IBM geospatial intelligence foundation model on NASA's Harmonized Landsat Sentinel-2 (HLS) dataset, a record of land cover and land use changes captured by Earth-orbiting satellites. By analyzing petabytes of satellite data to identify changes in the geographic footprint of phenomena such as natural disasters, cyclical crop yields, and wildlife habitats, this foundation model technology will help researchers provide critical analysis of our planet's environmental systems.
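
The model itself has not been released, but the general flavor of self-supervised pretraining on unlabeled satellite tiles can be conveyed with a toy masked-reconstruction objective in PyTorch; the patch dimensions, the tiny architecture, and the random tensors below are purely illustrative and are not the IBM-NASA model.

import torch
import torch.nn as nn

class TinyPatchAutoencoder(nn.Module):
    def __init__(self, patch_dim=768):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(patch_dim, 256), nn.ReLU(), nn.Linear(256, 128))
        self.decoder = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, patch_dim))

    def forward(self, patches, mask_ratio=0.75):
        mask = torch.rand(patches.shape[:2]) < mask_ratio          # hide most patches
        corrupted = patches.masked_fill(mask.unsqueeze(-1), 0.0)
        recon = self.decoder(self.encoder(corrupted))
        return ((recon - patches) ** 2)[mask].mean()               # scored only on the patches it never saw

model = TinyPatchAutoencoder()
fake_tiles = torch.randn(8, 196, 768)                              # stand-in for batches of HLS image patches
loss = model(fake_tiles)

Because the objective needs no labels, every archived tile becomes usable training signal, which is exactly why petabyte-scale archives suit foundation models.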

Another output from this collaboration is expected to be an easily searchable corpus of Earth science literature. IBM has developed an NLP model trained on nearly 300,000 Earth science journal articles to organize the literature and make it easier to discover new knowledge. Containing one of the largest AI workloads trained on Red Hat's OpenShift software to date, the fully trained model uses PrimeQA, IBM's open-source multilingual question-answering system. Beyond providing a resource to researchers, the new language model for Earth science could be infused into NASA's scientific data management and stewardship processes.
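
PrimeQA's own APIs are not reproduced here, but the end-user experience of extractive question answering over Earth science text can be approximated with a generic open-source pipeline; the model checkpoint and the passage are illustrative stand-ins rather than components of the IBM-NASA system.

from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
passage = ("The Harmonized Landsat Sentinel-2 project produces seamless surface "
           "reflectance data from the Landsat 8 and Sentinel-2 satellites.")
answer = qa(question="Which satellites contribute to the HLS dataset?", context=passage)
print(answer["answer"], answer["score"])             # the span of text the model believes answers the question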

"The beauty of foundation models is they can potentially be used for many downstream applications," said Rahul Ramachandran, senior research scientist at NASA's Marshall Space Flight Center in Huntsville, Alabama. "Building these foundation models cannot be tackled by small teams," he added. "You need teams across different organizations to bring their different perspectives, resources, and skill sets."

"Foundation models have proven successful in natural language processing, and it's time to expand that to new domains and modalities important for business and society," said Raghu Ganti, principal researcher at IBM. "Applying foundation models to geospatial, event-sequence, time-series, and other non-language factors within Earth science data could make enormously valuable insights and information suddenly available to a much wider group of researchers, businesses, and citizens. Ultimately, it could facilitate a larger number of people working on some of our most pressing climate issues."

Other potential IBM-NASA joint projects in this agreement include constructing a foundation model for weather and climate prediction using MERRA-2, a dataset of atmospheric observations. This collaboration is part of NASA's Open-Source Science Initiative, a commitment to building an inclusive, transparent, and collaborative open science community over the next decade.

For more information about this collaboration, visit the IBM Research Blog.

Statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

Press contact
Bethany Hill McCarthy
IBM Research
Bethany@ibm.com

View original content to download multimedia:https://www.prnewswire.com/news-releases/ibm-and-nasa-collaborate-to-research-impact-of-climate-change-with-ai-301735386.html

SOURCE IBM

Source: https://www.yahoo.com/entertainment/ibm-nasa-collaborate-research-impact-110000641.html
Killexams : NASA partners with IBM to build AI foundation models to advance climate science

U.S. space agency NASA isn’t just concerned about exploring outer space, it’s also concerned about helping humanity to learn more about the planet Earth and the impacts of climate change.

Today, NASA and IBM announced a partnership that will see the development of new artificial intelligence (AI) foundation models to help analyze geospatial satellite data, in a bid to help better understand and take action on climate change. To date, NASA has largely relied on the development of its own set of bespoke AI models to serve specific use cases. The promise of the foundation model approach is a large language model (LLM) that has been trained on lots of data that can serve as a more general purpose system that can be customized as needed.

Among the initial goals of the partnership is to train a foundation model on NASA’s Harmonized Landsat Sentinel-2 (HLS) dataset, which has petabytes of data collected from space about land use changes on Earth.

Beyond just helping to improve the state of climate analysis on Earth, IBM is hopeful that the new foundation model it develops jointly with NASA will have broader applicability and a positive impact on enterprise use cases of AI as well.


“What we’re doing with NASA is going to help us push innovation all the way from infrastructure and hardware up through distributed systems platforms, middleware and the applications themselves,” Priya Nagpurkar, VP, hybrid cloud platform and developer productivity at IBM Research, said during a press briefing announcing the partnership. “And it will include driving advances in AI architectures, and even data management techniques.”

Houston, we have a (big data) problem

To put it mildly, NASA has a lot of data.

Rahul Ramachandran, senior research scientist at NASA’s Marshall Space Flight Center in Huntsville, Alabama, explained during the press briefing that NASA actually has the largest collection of Earth observation data. That data has been collected to support NASA’s science mission to understand planet Earth as a complex system. The data comes from various instruments, and currently includes an archive with 70 petabytes of data. The archive is projected to grow within a few years to 250 petabytes.

“Clearly, given the scale of the data that we have, we have a big data problem,” Ramachandran said. “Our goal is to make our data discoverable, accessible and usable for broad scientific use in applications worldwide.”

Ramachandran added that NASA is always looking for new approaches and technologies that will help streamline the research process, as well as lower the barrier to entry for end users to utilize the complex science data held by the space agency. That’s where the development of foundation models comes into play to make it easier to benefit from the data that NASA has collected. 

The potential for the foundation model that NASA is building with IBM could literally be life changing for humanity.

For example, Ramachandran said that building a foundation model that has satellite image data could make it easier for someone in a disaster area to identify the extent of flooding, where the model automatically maps where the flooding is occurring. Another example could be identifying damage in a hurricane zone.

PyTorch and open-source AI will also benefit

On the technology side, IBM will be making extensive use of a series of technologies, including Red Hat OpenShift, for running the AI-training workloads and open-source machine learning frameworks, notably PyTorch.

The open-source PyTorch machine learning framework was started at Facebook (now known as Meta) and spun off as its own PyTorch Foundation in September 2022. IBM Research has been an active contributor to PyTorch, integrating capabilities into the PyTorch 1.13 release framework to help run large workloads on commodity hardware.

Raghu Ganti, principal researcher at IBM Research, said PyTorch is a core element of IBM’s AI strategy.

“We solely rely on PyTorch for training all our foundation models,” Ganti said. 

Ganti added that IBM will continue to contribute back to the PyTorch community as it continues to innovate on the technology to build increasingly powerful foundation models. In Ganti’s view, the joint effort with NASA to build foundation models will have multiple applications and broad impact.

“I think it will augment and accelerate the scientific process in terms of building and solving specific science problems,” he said. “Instead of people having to build their own individual machine learning pipeline, starting from collecting the large volumes of training data.”


Source: https://venturebeat.com/ai/nasa-partners-with-ibm-to-build-ai-foundation-models-to-advance-climate-science/
Killexams : IBM, NASA bet on AI for research on impact of climate change

IBM and NASA’s Marshall Space Flight Centre have announced a collaboration to use IBM’s artificial intelligence (AI) technology to discover new insights in NASA’s massive trove of Earth and geospatial science data. Their combined work will apply AI ‘foundation model’ technology to NASA’s Earth-observing satellite data for the first time, an IBM spokesperson informed businessline.

Foundation model at work

AI ‘foundation model technology’ is quickly gaining traction through models like ChatGPT and it has been applied to NASA’s Earth-observing satellite data for the first time, the spokesperson explained. Its goal is to advance the scientific understanding of and response to Earth and climate-related issues like natural disasters and warming temperatures.

‘Foundation models’ are types of AI models that are trained on a broad set of unlabelled data, can be used for different tasks, and can apply information about one situation to another. These models have rapidly advanced the field of natural language processing (NLP) technology over the last five years, and IBM is pioneering applications of foundation models beyond language.

Large volumes of data

Earth observations that allow scientists to study and monitor our planet are being gathered at unprecedented rates and volume. New and innovative approaches are required to extract knowledge from these vast data resources. The goal of this work is to provide an easier way for researchers to analyse and draw insights from these large datasets, the spokesperson said. IBM’s foundation model technology has the potential to speed up the discovery and analysis of these data in order to quickly advance the scientific understanding of Earth and response to climate-related issues. 

SWIR false colour composite of the snow-capped Himalayas on November 28, 2022. | Photo Credit: NASA IMPACT

IBM and NASA are planning to develop several new technologies to extract insights from Earth observations. One project will train an IBM geospatial intelligence foundation model on NASA’s Harmonised Landsat Sentinel-2 (HLS) dataset, a record of land cover and land use changes captured by Earth-orbiting satellites. By analysing petabytes of satellite data to identify changes in the geographic footprint of phenomena such as natural disasters, cyclical crop yields, and wildlife habitats, this foundation model technology will help researchers provide critical analysis of our planet’s environmental systems.

Develops new NLP model

Another output from this collaboration is expected to be an easily searchable corpus of Earth science literature. IBM has developed an NLP model trained on nearly three lakh Earth science journal articles to organise the literature and make it easier to discover new knowledge. Containing one of the largest AI workloads trained on Red Hat’s OpenShift software to date, the fully trained model uses PrimeQA, IBM’s open-source multilingual question-answering system. Beyond providing a resource to researchers, the new language model for Earth science could be infused into NASA’s scientific data management and stewardship processes.

Rahul Ramachandran, senior research scientist at NASA’s Marshall Space Flight Center in Huntsville, Alabama, said the beauty of foundation models is that the models can potentially be used for many downstream applications. “Building these foundation models cannot be tackled by small teams. You need teams across different organisations to bring their different perspectives, resources, and skill sets.” 

Valuable insights

Raghu Ganti, principal researcher at IBM, said foundation models have proven successful in NLP, and it’s time to expand that to new domains and modalities important for business and society.

“Applying foundation models to geospatial, event-sequence, time-series, and other non-language factors within Earth science data could make enormously valuable insights and information suddenly available to a much wider group of researchers, businesses, and citizens. Ultimately, it could facilitate a larger number of people working on some of our most pressing climate issues.”

Other potential IBM-NASA joint projects in this agreement include constructing a foundation model for weather and climate prediction using MERRA-2, a dataset of atmospheric observations, the spokesperson for IBM said. This collaboration is part of NASA’s Open-Source Science Initiative, a commitment to building an inclusive, transparent, and collaborative open science community over the next decade. 

Source: https://www.thehindubusinessline.com/news/science/ibm-nasa-bet-on-ai-for-research-on-impact-of-climate-change/article66488680.ece
C1000-083 exam dump and training guide direct download