HPE2-N69 - Using HPE AI and Machine Learning, Updated: 2024
Memorize these HPE2-N69 dumps and register for the test
Exam Code: HPE2-N69 Using HPE AI and Machine Learning, January 2024, by Killexams.com team
Using HPE AI and Machine Learning
HP Learning
Other HP exams
HPE0-S22 Architecting Advanced HPE Server Solutions
HPE2-K42 Designing HPE Nimble Solutions
HPE6-A47 Designing Aruba Solutions
HPE0-S54 Designing HPE Server Solutions
HPE0-S55 Delta - Designing HPE Server Solutions
H19-301 HCPA-IP Network (Huawei Certified Pre-sales Associate-IP Network)
HPE0-J50 Integrating Protected HPE Storage Solutions
HPE6-A68 Aruba Certified ClearPass Professional (ACCP)
HPE6-A70 Aruba Certified Mobility Associate Exam
HPE6-A71 Aruba Certified Mobility Professional exam 2023
HPE0-S58 Implementing HPE Composable Infrastructure Solutions
HPE0-V14 Building HPE Hybrid IT Solutions
HPE2-CP02 Implementing SAP HANA Solutions
HPE0-S57 Designing HPE Hybrid IT Solutions
HPE2-E72 Selling HPE Hybrid Cloud Solutions
HPE6-A72 Aruba Certified Switching Associate
HPE6-A73 Aruba Certified Switching Professional
HPE6-A82 Aruba Certified ClearPass Associate (ACCA)
HPE2-W07 Selling Aruba Products and Solutions
HPE2-T37 Using HPE OneView
HPE6-A48 Aruba Certified Mobility Expert (ACMX)
HPE6-A80 Aruba Certified Design Expert Written
HPE2-N69 Using HPE AI and Machine Learning
HPE6-A44 Aruba Certified Mobility Professional (ACMP) V8
HPE0-J69 HPE Storage Solutions
HPE0-S59 HPE Compute Solutions
HPE0-V25 HPE ATP Hybrid Cloud Solution
HPE6-A84 Aruba Certified Network Security Expert Written
HPE6-A69 Aruba Certified Switching Expert Written
HPE0-S60 HPE ASE - Compute Solutions
HPE6-A49 Aruba Certified Design Expert (ACDX) V8
HPE0-J58 Designing Multi-Site HPE Storage Solutions
HPE0-V17 Creating HPE Data Protection Solutions
HPE0-V15 HPE ATP - Hybrid IT Solutions
HPE6-A66 Aruba Certified Design Associate (ACDA)
HPE6-A78 Aruba Certified Network Security Associate (HCNSA)
HPE7-A01 Aruba Certified Campus Access Professional
HPE6-A75 Aruba Certified Edge Professional (ACEP)
HPE6-A81 Aruba Certified ClearPass Expert (ACCX)
HPE6-A85 Aruba Certified Associate - Campus Access (ACA)
HPE3-U01 Aruba Certified Network Technician (ACNT)
HPE0-V26 HPE ATP - Hybrid IT Solutions
HPE0-J68 HPE Storage Solutions
HPE0-P27 Configuring HPE GreenLake Solutions
killexams.com has its experts working continuously to collect, validate, and update HPE2-N69 dumps, which is why you will not find another HPE2-N69 dumps provider on the internet that is as valid and comprehensive. We claim that if you memorize all of our HPE2-N69 questions and practice with our VCE exam simulator, you will pass your exam on the first attempt.
What is one of the responsibilities of the conductor of an HPE Machine Learning Development Environment cluster?
A. It downloads datasets for training.
B. It uploads model checkpoints.
C. It validates trained models.
D. It ensures experiment metadata is stored.
What type of interconnect does HPE Machine Learning Development System use for high-speed, agent-to-agent communication?
A. Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE)
D. Data Center Bridging (DCB)-enabled Ethernet
At what FQDN (or IP address) do users access the WebUI for an HPE Machine Learning Development cluster?
A. Any of the agents' in a compute pool
B. A virtual one assigned to the cluster
C. The conductor's
D. Any of the agents' in an aux pool
Your cluster uses Amazon S3 to store checkpoints. You ran an experiment on an HPE Machine Learning Development Environment cluster, and you want to find the location of the best checkpoint created during the experiment.
What can you do?
A. In the experiment config that you used, look for the "bucket" field under "hyperparameters." This is the UUID for
B. Use the "det experiment download --top-n 1" command, referencing the experiment ID.
C. In the Web UI, go to the Task page and click the checkpoint task that has the experiment ID.
D. Look for a "determined-checkpoint/" bucket within Amazon S3, referencing your experiment ID.
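For context, checkpoint storage in a Determined-based cluster is declared in the experiment configuration rather than discovered ad hoc. A minimal sketch of the relevant section, assuming an S3 backend (the bucket name here is hypothetical):

```yaml
# Experiment config excerpt (illustrative; bucket name is hypothetical)
checkpoint_storage:
  type: s3
  bucket: my-checkpoint-bucket
  save_experiment_best: 1   # retain only the single best checkpoint for the experiment
```

With a config like this, the best checkpoint for an experiment ends up under the configured bucket, keyed by checkpoint UUID.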
A customer mentions that the ML team wants to avoid overfitting models.
What does this mean?
A. The team wants to avoid wasting resources on training models with poorly selected hyperparameters.
B. The team wants to spend less time on creating the code for models and more time training models.
C. The team wants to avoid training models to the point where they perform less well on new data.
D. The team wants to spend less time figuring out which CPUs are available for training models.
What is a benefit of HPE Machine Learning Development Environment, beyond open source Determined AI?
A. Automated user provisioning
B. Pipeline-based data management
C. Distributed training
D. Automated hyperparameter optimization (HPO)
What are the mechanics of how a model trains?
A. Decides which algorithm can best meet the use case for the application in question
B. Adjusts the model's parameter weights such that the model can better perform its tasks
C. Tests how accurately the model performs on a wide array of real world data
D. Detects data drift or concept drift that might compromise the ML model's performance
An ML engineer wants to train a model on HPE Machine Learning Development Environment without implementing hyperparameter optimization (HPO).
What experiment config fields configure this behavior?
A. profiling: enabled: false
B. hyperparameters: optimizer: none
C. searcher: name: single
D. resources: slots_per_trial: 1
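The `searcher` section of a Determined-style experiment config is where this behavior lives; a sketch of a single-trial experiment with no HPO, where the metric name and training length are illustrative assumptions:

```yaml
# Single trial, no hyperparameter search (metric and length values are illustrative)
searcher:
  name: single              # run exactly one trial with the fixed hyperparameters
  metric: validation_loss   # metric reported by the trial's validation step
  max_length:
    batches: 1000           # train for a fixed number of batches
```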
You are meeting with a customer who has several DL models deployed but wants to expand the projects.
The ML/DL team is growing from 5 members to 7 members. To support the growing team, the customer has assigned 2 dedicated IT staff. The customer is trying to put together an on-prem GPU cluster with at least 14 GPUs.
What should you determine about this customer?
A. The customer is not ready for an HPE Machine Learning Development solution, but you could recommend open-source Determined AI.
B. The customer is not ready for an HPE Machine Learning Development solution, but you could recommend an educational HPE Pointnext ASPS workshop.
C. The customer is a key target for HPE Machine Learning Development Environment, but not HPE Machine Learning
D. The customer is a key target for an HPE Machine Learning Development solution, and you should continue the
A customer is using fair-share scheduling for an HPE Machine Learning Development Environment resource pool.
What is one way that users can obtain relatively more resource slots for their important experiments?
A. Set the weight to a higher than default value.
B. Set the weight to a lower than default value.
C. Set the priority to a lower than default value.
D. Set the priority to a higher than default value.
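Under fair-share scheduling, a Determined-style experiment requests a relatively larger share of slots by raising its weight in the `resources` section of the experiment config; a minimal sketch, with the slot count as an illustrative assumption:

```yaml
# Experiment config excerpt (illustrative values)
resources:
  slots_per_trial: 4
  weight: 2   # default is 1; a higher weight earns proportionally more slots under fair-share
```

Note that priority, by contrast, is the knob used by priority scheduling, not fair-share.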
Ann Livermore, executive vice president of Hewlett-Packard, told channel partners that storage is best when part of a solution. "When you sell a ProLiant [server], along with it, sell storage or support service," she said. "And if you sell services with every ProLiant that you sell or along with every blade system, that creates an annuity stream where you can then keep renewing the services on an ongoing basis."
Ken Faircloth, senior account manager at KTI Kanatek Technologies, an Ottawa-based solution provider, said focusing on the solution would be easier if HP's own people would also do so.
"We get HP reps who talk products, products, products," Faircloth said. "But they have to start talking solutions. If they talk solutions, sales will go through the roof."
Solutions are especially important with blade servers. "You can blade Unix," Livermore said. "You can blade x86, whether it's Windows or Linux. You can blade storage. You can blade PCs. They're all inside of a smart enclosure that has power and cooling and management and virtualization built in."
Kyle Fitze, director of SAN product marketing at HP, said that the company is looking to enhance the storage component of its BladeSystem blade server environment.
For instance, HP plans to give its new c-Class blades the ability to work with the company's Enterprise File Services Clustered Gateway, which allows files stored on SAN arrays to be served in a NAS file format.
Also planned is a storage blade that provides JBOD (non-RAID) storage capacity, Fitze said. "This extends the ability to use BladeSystem servers for applications which need the extra capacity," he said.
For the long term, Fitze said HP also will be looking to virtualize storage within the BladeSystem.
That will be important as the market continues to adopt server virtualization technologies, said Victor Villegas, director of marketing and alliances at Adeara, a Sunnyvale, Calif.-based HP solution provider. "Once the virtualization market takes ahold of the industry, we will do well with blade storage virtualization," he said.
Also anticipated from HP storage are enhancements to local and remote replication technology for its EVA family of arrays to increase replication performance, and tools to simplify EVA deployment, Fitze said. HP also hopes to add SAS to its MSA family of entry-level arrays, Fitze said.
Growing up, my sister and I were pretty well-behaved kids who rarely got into any kind of serious trouble. We got good grades, followed the rules, and generally acted in a responsible manner that enabled our parents to adopt a fairly hands-off approach to parenting. There were, however, certain incidents that resulted in one of them loudly announcing “OK, new rule” and subsequently enacting fresh household legislation intended to put an end to the nonsense and property damage.
Earlier this week, I went up for my first flight of the year on a cold and overcast New Year’s Day. And while I always make an effort to objectively evaluate my decision making in my still-new-to-me Cessna 170, I didn’t expect to learn three new lessons and enact three new personal rules after one short flight. But that’s precisely what happened as I kicked off a new year of flying.
The first lesson took place before I’d even completed the preflight. My first clue that something was amiss occurred when I retrieved my tire pressure gauge from the small flight bag I keep in the front passenger seat. I noticed bits of shredded paper towel littering the bag. Making a mental note to continue placing my custom-cut aluminum bands around each tire to keep mice out of the airplane, I continued my preflight.
A short time later, while carefully inspecting my tailwheel on my hands and knees, I made eye contact with the culprit. There, staring me in the face from a small access hole at the base of the rudder, was a small brown field mouse.
A few choice selections of profanity scared it back into the fuselage. Fortunately, however, some light drumming on the side of the empennage scared it back out of the hole, and it leaped from the airplane and scurried away. A closer inspection of my flight bag connected the dots—I’d left a couple of energy bars in the bag after my last flight, and the mouse had set up camp, helping itself to the feast.
New rule No. 1: No more leaving energy bars in the airplane.
Thoroughly preflighted and apparently mouse-free, I hopped in and started the engine. Because I was the only one at the airfield, I opted to take a shorter route from my hangar to the runway. This route utilizes a dirt driveway that borders a large ditch. And it wasn’t until I advanced the throttle and tested the brakes that I realized there was a gradual slope all the way from my hangar to that ditch.
Brakes locked, the airplane slid toward the ditch at a crawling pace as I willed it to come to a stop. I used every trick my lifetime of winter driving in the Great Lakes region had taught me, including releasing the brakes to obtain some directional traction, but the ditch loomed ever closer. Just as I was creating a plan to pull the mixture and at least save the prop and engine, the right main mercifully encountered a small patch of gravel and the airplane ground to a stop.
As I only recently moved into my new hangar, I’d never taken this taxi route in the winter. Accordingly, I’d never noticed the gradual slope and treacherous ditch. It was a chilling eye-opener, and I was ultimately able to cling to the hallowed strip of gravel and proceed to the runway safely.
New rule No. 2: No taking the short taxi route with snow or ice on the ground.
Run-up complete, I trundled my way out onto the 3,100-by-90-foot grass strip and backtaxied to the end. On the way out, I made a mental note of an icy, muddy patch in the center of the runway about 600 feet from the threshold. I’m no stranger to operating on snow at this strip, but the odd combination of 1 to 2 inches of icy snow and muddy, unfrozen soil beneath robbed me of traction and made it challenging to turn around. An old skiplane trick of full forward yoke and some short blasts of power finally brought the tail around, and I was good to go at last.
The brisk temperature rewarded me with a density altitude of around 1,500 feet below sea level. I made a mental note to brag about this to a certain California-based friend who takes every opportunity to boast about his state’s warm winter climate. My beloved McCauley seaplane prop clawed through the thick winter air, making the most of my airplane’s modest 145 hp and clearing the muddy patch with ease.
The takeoff was uneventful, but the variable wind had developed into a healthy crosswind from the left. I kept this in mind, and on downwind, I took a step back and evaluated the situation. I was barely able to keep the airplane out of the weeds during my taxi out to the runway. Once there, I had difficulty turning around. And now I was setting up to land on a particularly slick surface with a crosswind.
Much as I wanted to spend an hour or two in the air, hammering out landing after landing, I decided not to press my luck. I was handling the hazards successfully thus far and could likely continue my pattern work safely, but doing so would expose me to an element of risk that, while not unmanageable, was not at all necessary. Leaving some power in during the flare, I made sure to bleed off as much energy as possible before touching down and did so safely and with no issues.
Turning around was a different story. Once again, I struggled to turn around on the runway and skated my way back to the hangar, making sure to take the long route back. I was happy to call it a day and abandon the out-of-kilter risk-reward scenario in favor of some University of Michigan football in the Rose Bowl from the comfort of my couch.
New rule No. 3: No pattern work on snowy runways with a crosswind in excess of 5 knots.
Although I only logged 0.1 hours of flight time, it was a particularly educational flight. Best of all, with the exception of a couple of energy bars and a shredded paper towel, there was no property damage to contend with. That’s a win no matter how you chalk it up, and with that, the day of new rules was more successful than any my sister and I had experienced in our household years ago.
Our friends over at HP, Inc. have prepared a special set of compelling technology predictions for the year ahead. From the company’s point of view 2024 should be quite a year! Straight from the executive suite, you’ll learn about what’s predicted to happen with AI, GenAI, LLMs, BI, data science, data engineering, and much more. Enjoy these special perspectives from one of our industry’s best known movers and shakers.
In 2024, AI will supercharge social engineering attacks on an unseen scale, spiking on red letter days
Alex Holland, Senior Malware Analyst at HP Inc.
“In 2024, cybercriminals will capitalize on AI to supercharge social engineering attacks on an unseen scale: generating impossible-to-detect phishing lures in seconds. These lures will appear highly plausible and look indistinguishable from the real thing, making it harder than ever for employees to spot – even those that have had phishing training.
“We are likely to see mass AI-generated campaigns spike around key dates. For instance, 2024 stands to see the most people in history vote in elections – using AI, cybercriminals will be able to craft localized lures targeting specific regions with ease. Similarly, major annual events, such as end of year tax reporting, sporting events like the Paris Olympics and UEFA Euro 2024 tournament, and retail events like Black Friday and Singles Day, will also supply cybercriminals hooks to trick users.
“With faked emails becoming indistinguishable from legitimate ones, businesses cannot rely on employee education alone. To protect against AI-powered social engineering attacks, organizations must create a virtual safety net for their users. An ideal way to do this is by isolating and containing risky activities, wrapping protection around applications containing sensitive data, and preventing credential theft by automatically detecting suspicious features of phishing websites. Micro-virtualization creates disposable virtual machines that are isolated from the PC operating system, so even if a user does click on something they shouldn’t, they remain protected.
“Organizations will also use AI to improve defence against the rise in attacks. High-value phishing targets will be identified and least privilege applied accordingly, and threat detection and response will be enhanced by continually scanning for and automatically remediating potential threats.”
Beyond phishing, the rise of LLMs will make the endpoint a prime target for cybercriminals in 2024
Dr. Ian Pratt, Global Head of Security for Personal Systems at HP Inc.
“One of the big trends we expect to see in 2024 is a surge in use of generative AI to make phishing lures much harder to detect, leading to more endpoint compromise. Attackers will be able to automate the drafting of emails in minority languages, scrape information from public sites – such as LinkedIn – to pull information on targets, and create highly-personalized social engineering attacks en masse. Once threat actors have access to an email account, they will be able to automatically scan threads for important contacts and conversations, and even attachments, sending back updated versions of documents with malware implanted, making it almost impossible for users to identify malicious actors. Personalizing attacks used to require humans, so having the capability to automate such tactics is a real challenge for security teams. Beyond this, we expect continued use of ML-driven fuzzing, where threat actors can probe systems to discover new vulnerabilities. We may also see ML-driven exploit creation emerge, which could reduce the cost of creating zero-day exploits, leading to their greater use in the wild.
“Simultaneously, we will see a rise in ‘AI PCs’, which will revolutionize how people interact with their endpoint devices. With advanced compute power, AI PCs will enable the use of “local Large Language Models (LLMs)” – smaller LLMs running on-device, enabling users to leverage AI capabilities independently from the Internet. These local LLMs are designed to better understand the individual user’s world, acting as personalized assistants. But as devices gather vast amounts of sensitive user data, endpoints will be a higher risk target for threat actors.
“As many organizations rush to use LLMs for their chatbots to boost convenience, they open themselves up to users abusing chatbots to access data they previously wouldn’t have been able to. Threat actors will be able to socially engineer corporate LLMs with targeted prompts to trick them into overriding their controls and giving up sensitive information – leading to data breaches.
“And, at a time when risks are increasing, the industry is also facing a skills crisis – with the latest figures showing 4 million open vacancies in cybersecurity; the highest level in five years. Security teams will have to find ways to do more with less, while protecting against both known and unknown threats. Key to this will be protecting the endpoint and reducing the attack surface. Having strong endpoint protection that aligns to Zero Trust principles straight out-of-the-box will be essential. By focusing on protecting against all threats – known and unknown – organizations will be much better placed in the new age of AI.”
In 2024, the democratization of AI tools will lead to a rise in more advanced attacks against firmware and even hardware
Boris Balacheff, Chief Technologist for System Security Research and Innovation at HP Inc.
“In 2024, powerful AI will be in the hands of the many, making sophisticated capabilities more accessible at scale to malicious actors. This will not only accelerate attacks on OS and application software, but also across more complex layers of the technology stack like firmware and hardware. Previously, would-be threat actors needed to develop or hire very specialist skills to develop such exploits and code, but the growing use of Generative AI has started to remove many of these barriers. This democratization of advanced cyber techniques will lead to an increase in the proliferation of more advanced, more stealthy, or more destructive attacks. We should expect more cyber events like MoonBounce and CosmicStrand, as attackers are able to find or exploit vulnerabilities to get a foothold below a device Operating System. Latest security research even shows how AI will enable malicious exploit generation to create trojans all the way into hardware designs, promising increased pressure in the hardware supply chain.
“In the year ahead, businesses will need to prioritize actively managing hardware and firmware security across the device lifecycle, from the points of delivery to recycling or decommissioning. With today’s highly distributed IT infrastructures, it is critical to be able to rely on fleets of endpoint devices to operate as expected, throughout their lifetime. This means defending and monitoring the security, and in particular the integrity, of device hardware and firmware is increasingly central to protecting the supply chain of any IT infrastructure. For years, this area of hardware and firmware security has been largely neglected, with businesses assuming that they were mostly protected by the high barrier to entry for such attacks. But with increased attacker pressure, organizations must make internal investments or identify the right partners to help bring device hardware and firmware security management in line with the level of maturity they already expect in software security. And given hardware procurement lifecycles, organizations should start now by setting requirements for robust built-in endpoint security, which is designed to support the secure verification, management, monitoring and remediation of hardware and firmware.”
In 2024, attackers will continue to seek ways into the ground floor, infecting devices before they are even onboarded
Michael Heywood, Business Information Security Officer at HP Inc.
“In 2024, we’ll see the attention on software and hardware supply chain security grow, as attackers seek to infect devices as early as possible – before they have even reached an employee or organization. With awareness and investment in cybersecurity growing each year, attackers have recognized that device security at the firmware and hardware layer has not kept pace. Breaches here can be almost impossible to detect, such as firmware backdoors being used to install malicious programs and execute fraud campaigns on Android TV boxes. The increasing sophistication of AI also means attackers will seek to create malware targeted at the software supply chain, simplifying the process of generating malware disguised as secure applications or software updates.
“In response to such threats, organizations will need to think more about who they partner with, making cybersecurity integral to business relationships with third parties. Organizations will need to spend time evaluating software and hardware supply chain security, validating the technical claims made by suppliers, to ensure they can truly trust vendor and partner technologies. Organizations can no longer take suppliers’ word on security at face value. A risk-based approach is needed to improve supply chain resilience by identifying all potential pathways into the software or product. This requires deep collaboration with suppliers – yes-or-no security questionnaires will no longer be enough. Organizations must demand a deeper understanding of their partners’ cybersecurity posture and risk – this includes discussing how incidents have changed the way suppliers manage security or whether suppliers are segregating corporate IT and manufacturing environments to shut down attackers’ ability to breach corporate IT and use it as a stepping stone to the factory.
“A risk-based approach helps ensure limited security resources are focused on addressing the biggest threats to effectively secure software and hardware supply chains. This will be especially important as supply chains come under increasing scrutiny from Nation State threat actors and cybercrime gangs.”
Using a structured approach when communicating can help you prioritize what you need to convey. In this article, the author introduces his “What, So What, Now What” framework. Much like the Swiss Army knife, known for its versatility and reliability, this structure is flexible and can be used in many different communication situations. The structure is comprised of three simple questions: 1) What: Describe and define the facts, situation, product, position, etc. 2) So What: Discuss the implications or importance for the audience. In other words, the relevance to them. 3) Now What: Outline the call-to-action or next steps such as taking questions or setting up a next meeting.
Effective communication has never been more critical in our rapidly evolving world, where every conversation, negotiation, meeting, or pitch could impact our personal and professional success. We are much more likely to achieve our communication goals if we package our messages in a clear, concise, logical manner.
As we stand on the cusp of 2024, technology continues its relentless march forward, shaping how we live, work, and interact with the world around us.
Indeed, 2023 was an exciting year in tech, and some of you may enjoy my podcast from a few weeks ago, where I opined during my SmartTechCheck podcast on what I believe were last year’s most significant tech issues.
Anticipating the trends that will define the technological landscape in 2024 requires a keen understanding of current developments and foresight into the evolving needs of society. It’s no easy task because of the expansive list of hot topics to choose from.
Regardless, let’s delve into the top five technology predictions for 2024 from my vantage point. While I believe it will be difficult to argue against the trends I’ve chosen, some might find the context and perspective controversial.
ARM-based PCs finally become a big deal.
As we enter 2024, the tech industry is abuzz with the anticipation of Arm-based Windows PCs becoming a significant category. Two key players poised to benefit from this transition are AMD and Qualcomm.
This shift represents a departure from the traditional x86 architecture, with Arm’s energy-efficient design offering a compelling alternative for Windows-powered devices.
Microsoft’s latest iteration of Windows 11, which runs on Arm-based processors, is getting close to providing an x86-like Windows experience with few app or peripheral compatibility problems. It also provides power and battery life benefits that classic x86 processors simply can’t offer.
AMD has been at the forefront of innovation with its Zen architecture, which has proven to be a game-changer in the CPU market. The collaboration between AMD and Microsoft in embracing Arm for Windows PCs aligns with the efficiency and performance goals that both companies have been striving to achieve.
AMD’s expertise in delivering powerful yet power-efficient processors positions it as a critical player in the Arm-based Windows PC landscape. Consumers can expect AMD-powered devices to provide a balance of high-performance computing and energy efficiency, catering to a diverse range of applications from gaming to productivity.
In past years, Qualcomm’s initial Snapdragon-based PCs did not capture significant market share as users and IT managers were reluctant to embrace a new class of laptops that might have app compatibility challenges and not provide sufficient performance for video and photo editing.
However, the company’s latest Snapdragon X Elite processors are showing promising performance and battery life results (in the same ballpark as what Apple has achieved with its M family of chips).
I fully expect customers will take notice as companies like Dell, HP, and others begin offering laptops with Qualcomm’s new processors.
Qualcomm is well-positioned to thrive in the Arm-based Windows PC era. Snapdragon processors are renowned for their energy efficiency and integration of advanced connectivity features.
As Windows PCs increasingly emphasize mobility and connectivity, Qualcomm’s expertise in delivering efficient, connected solutions makes them a natural fit for the ARM architecture. The Snapdragon-powered Windows PCs will likely excel in always-on connectivity, extended battery life, and seamless integration with 5G networks, enhancing the overall user experience.
As Arm-based Windows PCs gain prominence in 2024, AMD and Qualcomm stand out as beneficiaries of this transformative shift.
These chipmakers’ commitment to advancing processor technology aligns with the demands of an evolving market, offering consumers a new era of Windows computing characterized by enhanced performance, energy efficiency, and seamless connectivity.
The democratization of AI solutions will accelerate, especially with PCs.
AMD and Qualcomm stand to gain significantly as the technology sector eagerly awaits the rise of Arm-based Windows PCs as key new devices in the market. This transition signifies a departure from the traditional x86 architecture, positioning these companies at the forefront of this evolving landscape.
Supercomputing power that was once the province of large government agencies, or even countries that could afford to fund these types of programs, is now being fueled by AI-based data at a cost unthinkable just a few years ago.
Enabled by its Instinct MI300 Series accelerators, AMD is embracing an industry standards approach that allows it to provide a stronger value proposition from a pricing standpoint versus companies like Nvidia, which is a leader in this space but tends to have a more costly proprietary orientation.
AMD is also betting that its Ryzen AI family of Threadripper processors that enable local AI applications with its Zen architecture will broadly appeal to users. While this could be a game-changer in the CPU market, much of this will depend on Microsoft — who will carry the industry’s water on messaging — successfully convincing users that AI truly matters at the PC level.
This process will take a long time as PC OEMs often fragment their messaging on big innovation topics, unlike Apple, which speaks in a singular voice.
Qualcomm’s Snapdragon X Elite chips bring significant benefits from an AI standpoint. These advanced chips leverage AI-enhanced capabilities to optimize network performance, ensuring efficient data transmission and connectivity.
Integrating machine learning algorithms enhances the overall user experience by predicting network conditions, reducing latency, and improving the reliability of AI applications on mobile devices. The Snapdragon X Elite chips thus contribute to a seamless and intelligent AI-driven connectivity experience, positioning Qualcomm at the forefront of mobile AI innovation.
As Arm-based Windows PCs gain prominence in 2024, AMD and Qualcomm stand out as beneficiaries of this transformative shift.
Intel, a once-dominant force in the tech industry, especially during the 1990s and early 2000s, is facing new challenges despite its latest foray into AI-based CPU technology.
Although Intel continues to possess significant market share in the PC industry, which positions it potentially as a key player in AI-enabled laptops and desktops, the company seems to be grappling with issues related to vision and overall user excitement.
Given the current financial pressures and evolving market dynamics, it raises questions about Intel’s ability to meet these new challenges effectively. This shift in Intel’s market position reflects the rapidly changing landscape of the technology sector, where innovation and adaptability are crucial. It’s not 2002 anymore.
The maturation of 5G continues, transforming connectivity.
The hyperbole around 5G's benefits that began in 2018 was clumsy and ham-fisted in many ways. The initial rollout of 5G faced several challenges, contributing to an inelegant launch: infrastructure deployment complexities, varying global standards, and the need for significant investments hampered a smooth transition.
Issues like limited device availability and inconsistent coverage led to a fragmented user experience. Coordinating efforts across telecom operators and addressing compatibility concerns further complicated the early stages of 5G implementation, resulting in a less-than-ideal and somewhat disjointed launch.
But things are getting a lot brighter, which could finally unleash the potential of 5G.
The rollout of 5G networks has begun, ushering in faster and more reliable connectivity. In 2024, anticipate the widespread maturation of 5G technology, enabling a seamless and interconnected world.
Beyond faster smartphone internet speeds, 5G will underpin the growth of the internet of things (IoT), autonomous vehicles, and smart cities.
Integrating 5G into various industries will lead to enhanced capabilities, such as real-time remote surgeries, augmented reality experiences, and smart infrastructure management.
Moreover, research into 6G technology will likely gain momentum, setting the stage for even faster and more efficient communication systems in the latter half of the decade.
The global sustainability movement goofed by myopically focusing on EVs.
Hybrid cars represent a pragmatic transitional choice for steering the automotive industry away from traditional gas-based vehicles, providing a middle ground that addresses immediate environmental concerns while easing the shift towards fully electric vehicles (EVs).
There are several reasons why hybrids might have been a more sensible option during this transitional phase.
Hybrid-based cars offer a gradual shift towards cleaner transportation without requiring an extensive infrastructure overhaul. Unlike fully electric vehicles, hybrids can rely on existing gas stations and refueling infrastructure, making them more convenient for consumers and reducing the pressure on rapid charging network development.
Next, one of the primary concerns with early electric vehicles is range anxiety due to limited charging infrastructure and the time it takes to recharge. Hybrids alleviate this concern by incorporating an internal combustion engine and an electric motor. This dual power source provides a safety net, ensuring drivers won’t be stranded if they exceed the electric range.
Environmentally, while hybrids still rely on internal combustion engines, their integration of electric propulsion reduces overall emissions and improves fuel efficiency. This incremental reduction in environmental impact allows for a more gradual transition away from traditional gas-based vehicles, providing manufacturers and consumers time to adapt without sacrificing immediate gains in fuel efficiency.
Finally, there is the subject of cost. Without substantial tax subsidies at the state and federal levels, EVs are a challenging value proposition for most consumers. Subsidy-free EVs cost $60,000 or more, limiting their appeal to many consumers.
Hybrids often come with a lower upfront cost compared to fully electric vehicles. This affordability makes them a more accessible option for a broader range of consumers, encouraging faster uptake and contributing to a more widespread reduction in emissions. Generally, the cost difference between a hybrid and a comparable gasoline-only car is approximately $5,000.
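That roughly $5,000 premium invites a quick payback estimate. The sketch below is a back-of-the-envelope calculation; the mileage, fuel-economy, and gas-price figures are illustrative assumptions, not data from this article:

```python
# Back-of-the-envelope payback estimate for the ~$5,000 hybrid price premium.
# All inputs below are assumed, illustrative values.

def payback_years(premium, miles_per_year, mpg_gas, mpg_hybrid, gas_price):
    """Years of fuel savings needed to recoup the hybrid's upfront premium."""
    gallons_gas = miles_per_year / mpg_gas
    gallons_hybrid = miles_per_year / mpg_hybrid
    annual_savings = (gallons_gas - gallons_hybrid) * gas_price
    return premium / annual_savings

# Assuming 12,000 miles/year, 30 MPG gas vs. 50 MPG hybrid, $3.50/gallon
years = payback_years(5000, 12_000, 30, 50, 3.50)
print(f"Estimated payback: {years:.1f} years")  # roughly 9 years
```

Under those assumptions the premium pays for itself within a typical ownership period, which is the pragmatic case the paragraph above makes.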
Focusing on hybrid cars as a transitional step offers a practical and balanced approach to moving away from traditional gas-based vehicles. By addressing concerns related to infrastructure, range anxiety, environmental impact, and cost, hybrids pave the way for a smoother and more inclusive transition towards a more sustainable automotive future.
Unfortunately, the myopic focus on EVs has turned off many consumers, proving once again that indifference to the challenges above has backfired; hybrids represent a much more reasonable approach to transitioning the auto industry.
Apple’s credibility will be put to the test when Vision Pro finally ships.
Extended reality (XR), encompassing virtual reality (VR), augmented reality (AR), and mixed reality (MR), is set to redefine how we experience the digital and physical worlds.
XR technologies will mature in 2024, offering more immersive and interactive experiences. From virtual meetings and remote collaboration to augmented reality applications in education and training, XR will bridge the gap between the digital and physical realms.
Whether Apple likes it or not, the company will become the poster child for the future success or failure of the entire AR/VR category once Vision Pro begins shipping in the February or March timeframe (if the industry rumors are true).
As I’ve maintained before, Apple’s “spatial computing” approach will depend mainly on the “killer” app getting developed, which I believe will not come from Apple.
The gaming industry, in particular, will see a surge in XR integration, creating more lifelike and engaging virtual worlds. Additionally, XR will find applications in health care, allowing for advanced medical simulations and remote patient monitoring. As the technology evolves, XR will play an increasingly vital role in enhancing our understanding of and engagement with the world around us.
Unquestionably, AI is poised to become an integral part of our daily lives in 2024, transcending its role as a behind-the-scenes technology. Advances in natural language processing and computer vision will make AI more accessible and user-friendly, leading to a more intuitive and humanized interaction with technology.
AI will continue to play a pivotal role in health care diagnostics, personalized education, and content creation. Conversational AI, powered by sophisticated language models, will redefine customer service and automate various business processes. However, as AI becomes more omnipresent, ethical considerations surrounding data privacy, bias, and accountability will demand increased attention and regulation.
2024 promises to be a pivotal year in technology, with the emergence of AI-based PCs and the ripening of 5G, AI, and AR/VR/XR emerging as the driving forces behind transformative change.
As these innovations unfold, the challenge will be to navigate the ethical and societal implications, ensuring that we harness the benefits of technology for the greater good.
The future is exciting, and as we embrace these technological advancements, we must also remain vigilant in shaping a future that is sustainable and conducive to the well-being of humanity.
Generative artificial intelligence dominated headlines in 2023 and looks to do the same in 2024. Predictions from thought leaders at CrowdStrike, Intel 471, LastPass and Zscaler forecast how the technology will be used, abused and leveraged in surprising ways in the year ahead.
The curated perspectives assembled here go beyond the potential and perils of AI in 2024, covering how the technology will impact workforces and attack surfaces and create new data insecurities as companies struggle to manage new large language model (LLM) data pools.
As the AI boom continues, the cybersecurity stakes are raised into the new year, with a U.S. presidential election in November, a continued skills gap for the cybersecurity sector to contend with, and the rise of ransomware threats once again worrying infosec pros.
2024 will bring a serious cyberattack or data breach related to AI
Mike Lieberman, CTO and cofounder, Kusari:
The rush to capitalize on the productivity benefits of AI has led to teams cutting corners on security. We’re seeing an inverse correlation between an open source AI/ML project’s popularity and its security posture. On the other end, AI will help organizations readily address cybersecurity by being able to detect and highlight common bad security patterns in code and configuration. Over the next few years, we will see AI improving to help provide guidance in more complex scenarios. However, AI/ML must be a signal – not a decision maker.
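Lieberman's "signal, not a decision maker" point can be sketched as a policy gate that requires corroboration: an AI risk score contributes one vote but can never trigger a block on its own. This is an illustrative toy, and the signal names and threshold are made up for the example:

```python
# Toy policy gate: the AI score alone can never trigger a block;
# an independent, non-AI signal must corroborate it.
def should_block(ai_risk_score: float, failed_signature: bool, known_bad_hash: bool) -> bool:
    ai_signal = ai_risk_score > 0.9            # AI contributes one vote (assumed threshold)
    non_ai_signals = [failed_signature, known_bad_hash]
    # Block only when the AI vote is corroborated, or when the
    # deterministic signals agree on their own.
    return (ai_signal and any(non_ai_signals)) or all(non_ai_signals)

print(should_block(0.95, False, False))  # False: AI alone is not enough
print(should_block(0.95, True, False))   # True: AI plus an independent signal
```

The design choice is the point: a noisy model output gets weighed against deterministic evidence instead of being trusted outright.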
AI is a wild card
Michael DeBolt, chief intelligence officer, Intel 471:
While there doesn’t appear to be a killer AI application for cybercriminals thus far, its power could be helpful for some of the mundane backend work that cybercriminals perform.
The advent of cybercrime-as-a-service, the term for the collective goods and services that threat actors supply to each other, is marked by an emphasis on specialization, scale and efficiency. For example, LLMs could be used to sort through masses of stolen data to figure out the most important data to mention when extorting a company, or a chatbot could be employed to engage in preliminary ransom negotiations.
Another hypothetical innovation could be an AI tool that calculates the maximum ransom an organization will pay based on the data that is stolen. We reported a few examples of actors implementing AI in their offers during the second quarter of 2023, which included an initial access broker (IAB) offering free translation services using AI. In May 2023, we reported a threat actor offering a tool that allegedly could bypass ChatGPT’s restrictions.
AI and ML tools are capable of enabling impersonation via video and audio, which pose threats to identity and access management. Videos rendered using AI are fairly detectable now, but synthesized voice cloning is very much a threat to organizations that use voice biometrics as part of authentication flows. We still assess that AI cannot be fully relied upon for more intricate cybercrime, and doing so in its current form will likely render flawed results. But this area is moving so swiftly it's difficult to see what is on the horizon.
The proliferation of open-source LLMs and services — some of which are being built with the intention of not having safety guardrails to prevent malicious use — means this area remains very much a wild card.
AI blind spots open the door to new corporate risks
Elia Zaitsev, CTO, CrowdStrike:
In 2024, CrowdStrike expects that threat actors will shift their attention to AI systems as the newest threat vector to target organizations, through vulnerabilities in sanctioned AI deployments and blind spots from employees’ unsanctioned use of AI tools.
After a year of explosive growth in AI use cases and adoption, security teams are still in the early stages of understanding the threat models around their AI deployments and tracking unsanctioned AI tools that have been introduced to their environments by employees. These blind spots and new technologies open the door to threat actors eager to infiltrate corporate networks or access sensitive data.
Critically, as employees use AI tools without oversight from their security team, companies will be forced to grapple with new data protection risks. Corporate data that is input into AI tools isn't just at risk from threat actors targeting vulnerabilities in those tools to extract it; the data is also at risk of being leaked or shared with unauthorized parties as part of the system's training protocol.
2024 will be the year when organizations will need to look internally to understand where AI has already been introduced into their organizations (through official and unofficial channels), assess their risk posture, and be strategic in creating guidelines to ensure secure and auditable usage that minimizes company risk and spend but maximizes value.
GenAI will level up the role of security analysts
Chris Meenan, vice president product management, IBM Security:
Companies have been using AI/ML to improve the efficacy of security technologies for years, but the introduction of generative AI will be aimed squarely at maximizing the human element of security. In the coming year, GenAI will begin to take on certain tedious, administrative tasks on behalf of security teams, but beyond this, it will also enable less experienced team members to take on more challenging, higher-level tasks. For example, we'll see GenAI being used to translate technical content, such as machine-generated log data or analysis output, into simplified language that is more understandable and actionable for novice users. By embedding this type of GenAI into existing workflows, it will not only free up security analysts' time in their current roles, but enable them to take on more challenging work, alleviating some of the pressure created by current security workforce and skills challenges.
Increase in sophisticated, personalized phishing and malware attacks
Ihab Shraim, CTO, CSC Digital Brand Services:
Phishing and malware continue to be the most used cyber threat vectors for launching fraud and data theft attacks, especially when major events occur and frenzied reactions abound. In 2024, with the rise of generative AI tools such as FraudGPT, cybercriminals will have a huge advantage launching phishing campaigns with both speed and sophistication. ChatGPT will allow bad actors to craft phishing emails that are personalized, targeted, and free of spelling and grammatical errors, which will make such emails harder to detect. Moreover, dark web AI tools will be easily available, allowing for more complex, socially engineered deepfake attacks that manipulate the emotions and trust of targets at even faster rates.
Beyond phishing, the rise of LLMs will make the endpoint a prime target for cybercriminals in 2024
Dr. Ian Pratt, global head of security for personal systems at HP Inc.:
One of the big trends we expect to see in 2024 is a surge in the use of generative AI to make phishing lures much harder to detect, leading to more endpoint compromise. Attackers will be able to automate the drafting of emails in minority languages, scrape information from public sites such as LinkedIn to pull information on targets, and create highly personalized social engineering attacks en masse. Once threat actors have access to an email account, they will be able to automatically scan threads for important contacts, conversations, and even attachments, sending back updated versions of documents with malware implanted, making it almost impossible for users to identify malicious actors. Personalizing attacks used to require humans, so the capability to automate such tactics is a real challenge for security teams. Beyond this, we expect continued use of ML-driven fuzzing, where threat actors probe systems to discover new vulnerabilities. We may also see ML-driven exploit creation emerge, which could reduce the cost of creating zero-day exploits, leading to their greater use in the wild.
Simultaneously, we will see a rise in "AI PCs," which will revolutionize how people interact with their endpoint devices. With advanced compute power, AI PCs will enable the use of “local Large Language Models (LLMs)” — smaller LLMs running on-device, enabling users to leverage AI capabilities independently from the internet. These local LLMs are designed to better understand the individual user’s world, acting as personalized assistants. But as devices gather vast amounts of sensitive user data, endpoints will be a higher risk target for threat actors.
As many organizations rush to use LLMs for their chatbots to boost convenience, they open themselves up to users abusing those chatbots to access data they previously wouldn't have been able to. Threat actors will be able to socially engineer corporate LLMs with targeted prompts to trick them into overriding their controls and giving up sensitive information, leading to data breaches.
The menace – and promise – of AI
Alex Cox, director of LastPass’ threat intelligence, mitigation and escalation team:
AI is trending everywhere. We’ll continue to see that in 2024 and beyond. The capabilities unlocked by AI, from GenAI to deepfakes, will completely shift the threat environment.
Take phishing as an example. An obvious “tell” of a phishing email is imperfect grammar. Anti-phishing technologies are built with these errors in mind and are trained to find them. However, generative AI like ChatGPT has significantly fewer shortcomings when it comes to language — and, as malicious actors take advantage of these tools, we’re already beginning to see the sophistication of these attacks increase.
AI’s big data capabilities will also be a boon to bad actors. Attackers are excellent at stealing troves of information — emails, passwords, etc. — but traditionally they’ve had to sift through it to find the treasure. With AI, they’ll be able to pull a needle from a haystack instantaneously, identifying and weaponizing valuable information faster than ever.
Thankfully, security vendors are already building AI into their tools. AI will help the good guys sift through big data, detect phishing attempts, provide real-time security suggestions during software development and more.
Security professionals should help their leadership understand the AI landscape and its potential impacts on their organization. They should update employee education, like anti-phishing training and communicate with vendors about how they are securing against these new capabilities.
There will be a transition to AI-generated tailored malware and full-scale automation of cyberattacks
Adi Dubin, vice president of product management, Skybox Security:
Cybersecurity teams face a significant threat from the rapid automation of malware creation and execution using generative AI and other advanced tools. In 2023, AI systems capable of generating highly customized malware emerged, giving threat actors a new and powerful weapon. In the coming year, the focus will shift from merely generating tailored malware to automating the entire attack process. This will make it much easier for even unskilled threat actors to launch successful attacks.
Securing AI tools to challenge teams
Dr. Chaz Lever, senior director, security research, Devo:
It’s been a year since ChatGPT hit the scene, and since its debut, we’ve seen a massive proliferation in AI tools. To say it’s shaken up how organizations approach work would be an understatement. However, as organizations rush to adopt AI, many lack a fundamental understanding of how to implement the right security controls for it.
In 2024, security teams' biggest challenge will be properly securing the AI tools and technologies their organizations have already onboarded. We've already seen attacks against GenAI models, such as model inversion, data poisoning, and prompt injection, and as the industry adopts more AI tools, AI attack surfaces across these novel applications will expand. This will pose a couple of challenges: refining the ways AI is used to help improve efficiency and threat detection while grappling with the new vulnerabilities these tools introduce. Add in the fact that bad actors are also using these tools to help automate the development and execution of new threats, and you've created an environment ripe for new security incidents.
Just like any new technology, companies will need to balance security, convenience, and innovation as they adopt AI and ensure they understand the potential repercussions of it.
From threat prevention to prediction, cybersecurity nears a historic milestone
Sridhar Muppidi, CTO, IBM Security:
As AI crosses a new threshold, security predictions at scale are becoming more tangible. Although early security use cases of generative AI focus on the front end, improving security analysts’ productivity, I don’t think we’re far from seeing generative AI deliver a transformative impact on the back end to completely reimagine threat detection and response into threat prediction and protection. The technology is there, and the innovations have matured. The cybersecurity industry will soon reach a historic milestone, achieving prediction at scale.
The democratization of AI tools will lead to a rise in more advanced attacks against firmware and even hardware
Boris Balacheff, chief technologist for system security research and innovation at HP Inc.:
In 2024, powerful AI will be in the hands of the many, making sophisticated capabilities more accessible at scale to malicious actors. This will accelerate attacks not only in OS and application software, but also across more complex layers of the technology stack like firmware and hardware. Previously, would-be threat actors needed to develop or hire very specialist skills to develop such exploits and code, but the growing use of generative AI has started to remove many of these barriers. This democratization of advanced cyber techniques will lead to an increase in the proliferation of more advanced, more stealthy, or more destructive attacks. We should expect more cyber events like MoonBounce and CosmicStrand, as attackers are able to find or exploit vulnerabilities to get a foothold below a device's operating system. Recent security research even shows how AI will enable malicious exploit generation to create trojans all the way into hardware designs, promising increased pressure on the hardware supply chain.
The political climate will continue to be in uncharted waters with disinformation, deepfakes and the advancement of AI
Ed Williams, regional VP of pen testing, EMEA at Trustwave:
In past U.S. elections, databases were leaked, and one can only assume that a cyberattack will be attempted, and possibly succeed, again.
AI has the ability to spread disinformation via deepfakes, and in 2024 this will only continue to explode. Similarly, deepfakes and other misinformation are already prevalent today. This shows that many people do not check for authenticity, and what they see on their phones and social media becomes their idea of the truth, which only amplifies the impact. There is discourse on both sides of the aisle, particularly ahead of party elections. This alone will create an environment susceptible to spreading misinformation and encourage nation-states to interfere where they can.
Enhanced phishing tools will improve social engineering success rates
Nick Stallone, senior director, governance, risk and compliance leader, MorganFranklin Consulting:
2024 will see broader adoption of automated and advanced spear phishing/vishing tools. These tools, combined with enhanced and more accessible deepfake and voice cloning technology, will vastly improve social engineering success rates. This will lead to increased fraud and compromised credentials perpetrated through these methods. All industries must be aware of these improved methods and focus on incorporating updated controls, awareness, and training to protect against them as soon as possible.
On the other side, cybersecurity tools that incorporated machine learning and artificial intelligence over the past few years will also become more efficient in protecting organizations from these threats. The training models associated with these tools will have access to more data from increased adoption, leading to shorter implementation periods and exponential market growth in 2024. These tools will bring the greatest level of efficiencies and reduced costs for cybersecurity monitoring and assessments, allowing security teams to be more focused on their organization’s greatest risks.
Democratization of AI will be a double-edged sword for cybersecurity
Atticus Tysen, SVP and chief information security officer, Intuit:
While the democratization of AI shows great promise, its widespread availability poses an unprecedented challenge for cybersecurity. AI will evolve specific attacks against enterprises to become continuous, ubiquitous threats against businesses, individuals, and the infrastructure they rely upon. Even still, it will be a race against the threat actors to design resilient systems and protections. If we fail, the risk of successful hacks becoming commonplace and wreaking havoc in the near future is a clear and present danger.
In 2024, English will become the best programming language for evil
Fleming Shi, CTO, Barracuda:
It was no surprise that coming into 2023, generative AI would be integrated into security stacks and solutions. However, the big surprise was how quickly generative AI has taken over every aspect of the technology space. This is concerning as we enter 2024 because just as security professionals are using the new technology to add to their defenses, bad actors are doing the same. LLMs are extremely capable of writing code, but they often come with guardrails that prevent them from writing malicious code. However, generative AI can be "fooled" into helping threat actors anyway, particularly when it comes to social engineering techniques. Rather than telling the tool to create an email phishing template, one only has to ask it to write a letter from a CEO asking for payment for an invoice. The slight changes in phrasing make these tools vulnerable, generally available, and extremely useful to bad actors everywhere. Because this process is so easy, 2024 will be the year that English becomes the best programming language for evil.
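Shi's point about phrasing is easy to demonstrate with a toy blocklist filter. This is a deliberately naive stand-in, not any real model's guardrail: the explicit request trips the filter, while the rephrased "CEO letter" request sails through.

```python
# Toy illustration of why keyword-style guardrails are brittle:
# a blocklist catches the explicit ask but not the rephrased one.

BLOCKED_TERMS = {"phishing", "malware", "exploit"}

def naive_guardrail(prompt: str) -> bool:
    """Return True if the prompt passes a simple keyword blocklist."""
    words = prompt.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)

print(naive_guardrail("Write a phishing email template"))
# False: the explicit request is blocked
print(naive_guardrail("Write a letter from a CEO asking for payment for an invoice"))
# True: the rephrased request passes, though the intent is identical
```

Real LLM guardrails are far more sophisticated than a blocklist, but the failure mode is the same in kind: intent hides behind innocuous wording.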
AI attacks to give new meaning to 'garbage in, garbage out'
Dave Shackleford, faculty and IANS Research, founder and principal consultant:
In 2024, we will definitely see emerging attacks against machine learning and AI models and infrastructure. With more and more organizations relying on cloud-based AI processing and data models, it's likely that attackers will begin targeting these environments and data sets with disruptive attacks as well as data pollution strategies. Today, we have almost nothing in terms of defined attack paths and strategies for this within frameworks like MITRE ATT&CK, but that will likely change in the next year. These attacks will give new meaning to the classic maxim of "garbage in, garbage out," and we will need to learn to identify and defend against them.
Language models pose dual threats to software security
Andrew Whaley, the senior technical director, Promon:
Large language models (LLMs) have come a remarkably long way over the past year, and with them, so have bad actors' reverse engineering capabilities. This poses two main threats: first, reverse engineering is now far easier, giving fledgling hackers capabilities typically associated with specialists; second, traditional protection methods are less effective against automated deobfuscation attacks. This increases software's vulnerability to malicious exploitation and will lead to an expected rise in incidents, including high-value attacks. Examples include mass attacks against mobile banking apps, remote mobile OS takeover, and malware targeting smart devices.
Deep faked CEOs
Navroop Mitter, CEO of ArmorText:
Dramatic improvements in the quality of generated voice and video of real-life persons coupled with further improvements in GenAI and LLMs to automatically assess and replicate the nuances of how individuals communicate, both orally and in written form, will enable novel attacks for which most organizations are severely underprepared.
Over the next 24 months organizations will face attackers mimicking their executives not just by email spoofing, but perfect AI driven mimicry of their voice, likeness, and diction and this will present multiple challenges, but most especially during incident response. How will companies distinguish between the Real McCoy and a near perfect imposter amidst the chaos of a crisis?
Existing policies and procedures designed around handling rogue executives won’t apply because the Real McCoy is still present and very much needed in these conversations.
Businesses will need to learn to hide their attack surface at a data level
Sam Curry, VP and CISO at Zscaler:
The influx of generative AI tools such as ChatGPT has forced businesses to realize that if their data is available in the cloud or on the internet, then it can be used by generative AI and, therefore, by competitors. If organizations want to prevent their IP from being used by gen AI tools, they will need to ensure their attack surface is hidden at a data level rather than just at an application level.
Based on the rapid adoption of gen AI tools we predict businesses will accelerate their efforts to classify all their data into risk categories and implement proper security measures to prevent leakage of IP.
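Curry's data-level classification step can be sketched as a simple tag-to-tier mapping that runs before any record is exposed to a cloud AI tool. The tiers, tag names, and rules below are illustrative assumptions for the sketch, not Zscaler's actual scheme:

```python
# Illustrative sketch: classify records into risk tiers before deciding
# whether they may reach cloud-hosted generative AI tools.
# Tier names and tag keywords are assumed for the example.

RULES = [
    ("restricted",   ("ssn", "source_code", "api_key")),
    ("confidential", ("salary", "contract")),
]

def classify(record_tags: set) -> str:
    """Return the highest-risk tier whose keywords match the record's tags."""
    for tier, keywords in RULES:  # rules are ordered most- to least-sensitive
        if any(k in record_tags for k in keywords):
            return tier
    return "public"

print(classify({"api_key", "readme"}))  # restricted: never leaves the boundary
print(classify({"salary"}))             # confidential: internal tools only
print(classify({"blog_post"}))          # public: safe for external AI tools
```

Ordering the rules from most to least sensitive means a record carrying mixed tags always lands in its highest-risk tier, which is the conservative default a leakage-prevention policy wants.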
API security evolves as AI enhances offense-defense strategies
Shay Levi, CTO and co-founder, Noname Security:
In 2023, AI began transforming cybersecurity, playing pivotal roles both on the offensive and defensive security fronts. Traditionally, identifying and exploiting complex, one-off API vulnerabilities required human intervention. AI is now changing this landscape, automating the process, enabling cost-effective, large-scale attacks. In 2024, I predict a notable increase in the sophistication and scalability of attacks. We will witness a pivotal shift as AI becomes a powerful tool for both malicious actors and defenders, redefining the dynamics of digital security.
The emergence of 'poly-crisis' due to pervasive AI-based cyberattacks
Agnidipta Sarkar, VP CISO Advisory, ColorTokens:
We saw the emergence of AI in 2022, and we saw the emergence of the misuse of AI as an attack vector, helping make phishing attempts sharper and more effective. In 2024, I expect cyberattacks to become pervasive as enterprises transform. It is possible today to entice AI enthusiasts to fall prey to AI prompt injection. Come 2024, perpetrators will find it easier to use AI to attack not only traditional IT but also cloud containers and, increasingly, ICS and OT environments, leading to the emergence of a "poly-crisis" that threatens not only financial impact but also human life, simultaneously and in cascading effects. Critical computing infrastructure will be under increased threat due to rising geopolitical tensions. Cyber defense will be automated, leveraging AI to adapt to newer attack models.
AI developments for threat actors will lead to nearly real-time detection methods
Mike Spanbauer, Field CTO, Juniper Networks:
AI will continue to prove dangerous in the hands of threat actors, accelerating their ability to write and deliver effective threats. Organizations must adapt how they approach defense measures and leverage new, proven methods to detect and block threats. We will see the rise of nearly real-time measures that can identify a potentially malicious file or variant of a known threat at line rate.
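One basic building block behind detecting a known threat at line rate is signature lookup, typically a constant-time hash match against a threat-intelligence set. A minimal sketch (the signature set is invented for illustration; real engines layer fuzzy hashing and ML models on top of this, which is not shown here):

```python
# Sketch of hash-based known-threat detection (illustrative only; not any
# vendor's implementation). Exact-match lookup is O(1), which is what makes
# it viable at line rate.
import hashlib

# Hypothetical signature set of known-malicious payload hashes.
KNOWN_BAD_SHA256 = {
    hashlib.sha256(b"malicious-payload-example").hexdigest(),
}

def is_known_threat(payload: bytes) -> bool:
    # Weakness: any byte-level variant produces a different hash and evades
    # the lookup, which is why the text above points to AI-assisted methods
    # for catching variants of known threats.
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_SHA256

print(is_known_threat(b"malicious-payload-example"))   # True
print(is_known_threat(b"malicious-payload-examplE"))   # False: variant evades
```

The trade-off illustrated here (fast but brittle exact matching versus slower, smarter variant detection) is exactly the gap the quoted prediction expects AI-based detection to close.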
AI won't be used for full-fledged attacks, but social engineering attacks will proliferate
Etay Maor, senior director of security strategy, Cato Networks:
No, there won’t be a wave of AI-based attacks. While AI has been getting a lot of attention ever since the introduction of ChatGPT, we are not even close to seeing a full-fledged AI-based attack. You don’t have to take my word for it; the threat actors on major cybercrime forums are saying it as well. Hallucinations, model restrictions, and the current maturity level of LLMs are just some of the reasons this is a non-issue at this point.
But we should expect to see LLMs being used to expedite and perfect small portions or tasks of attacks, whether email creation, help with social engineering by generating profiles or documents, and more. AI is not going to replace people, but people who know how to use AI will replace those who don’t.
The balance between attackers and defenders will be continuously tested
Kayla Williams, CISO, Devo:
This one may be a no-brainer, but it must be said again and again. Bad actors will use AI/ML and other advanced technologies to create sophisticated attack tactics and techniques. They’ll use these tools to pull off more and faster attacks, putting increased pressure on security teams and defense systems. The pace of progress is equally fast on both sides — defenders and attackers — and that balance will continually be tested in the coming year.
AI accelerates social engineering attacks
Kevin O’Connor, head of threat research at Adlumin:
Commercially available and open-source AI capabilities, including large language models (LLMs) like ChatGPT and LLaMA and their countless variants, will help attackers design well-thought-out and effective social engineering campaigns. With AI systems increasingly integrating troves of personal information from social media sites from LinkedIn to Reddit, we’ll see even low-level attackers gain the ability to create targeted and convincing social engineering campaigns.
Leaders from Paris Baguette, AmCham, KPMG, and HP share their approaches to facilitating volunteerism among their employees, hoping to instill the habit across their workforces.
With the new year comes new resolutions. Most would say their “new year, new me” would consist of a healthier diet or clocking in some time at the gym. However, have you ever considered taking up volunteerism as a habit over the course of the year?
To inspire, HRO has put together this special feature on how leaders from Paris Baguette, AmCham, KPMG, and HP, partners of the SG Cares Giving Week which took place in December 2023, are encouraging volunteerism as a culture among their employees.
Case study: Paris Baguette