In an ever-changing field of technology, engineers expect continued learning to be part of the job. Flexibility in engineering often means improving current skill sets and cultivating new ones. And if being able to adapt to changing situations is among the most important skills an engineer can possess, what do engineers perceive as the obstacles to building those skills?
Machine Design’s 2022 Salary Survey, which polled a cross-section of readers, offers a glimpse into the challenges engineers face and the opportunities employers can seize to help right-size their staffing efforts.
More than 40% of all participants in this year’s survey hold a bachelor’s degree or higher. This foundational education is often not enough to support ongoing job requirements or competency, so engineers typically look to continuing education and training, pursued alongside their professional duties, to stay current.
Focusing on the education-related survey questions, we highlight a few insights into ongoing education formats and the respondents’ preferences.
The survey asked: “What are some of the ways you continue your engineering education?”
Respondents replied: Engineering videos (43.55%), webcasts (37.10%), engineering/technology publications (35.48%), seminars (34.48%), white papers (33.47%), engineering/technology publication websites (31.05%) and in-person trade shows and conferences (29.64%) ranked favorably among respondents. Low on their list of preferences were in-classroom college and employer-sponsored courses (15.73%), online discussion forums (15.12%) and podcasts (12.10%).
Respondents were then asked to indicate which of the education forms are paid for by their employers. In line with engineers’ learning preferences, the survey showed that employers were likely to pay for employees’ attendance at trade shows and conferences (37.7%), as well as seminars (33.67%). Only 22.78% paid for certifications or college tuition, and 25.81% paid for engineering association dues.
Asked about the biggest challenges in staying current with engineering information relevant to their work, respondents most commonly answered: time. And if respondents were feeling time-strapped, they were further challenged to find information applicable to their job responsibilities. Sifting useful, relevant information out of an abundance of online sources, finding summarized information on emerging technologies and finding specialized courses were just a few of the impediments noted.
Concern about keeping up with rapidly changing technology, and knowing which of it is most relevant to the company, was a common refrain. “Parsing which technological advances are relevant to my company, and how soon they will be available or reasonably priced,” noted one respondent. Another pointed out that the cost of new technology “is seen by management as too high to incorporate.”
At least a couple of respondents alluded to the generational divide among engineers and their ability to adapt technological advances to their current work environments. “As you get older, you tend to be a little slower to learn new skills, and you need to put more effort into learning and improving,” said one respondent. “Young talent is abundant, and they have the advantage of age,” added another.
Other interesting comments fed into the survey. One respondent felt that some publications miss the mark in serving the needs of their engineering audiences, taking umbrage with “dishonest technical publications that try to steer industry toward certain trends.”
Several others echoed this theme, saying the tasks of “finding accurate information and reviews along with adequate customer specifications” were arduous.
All told, perceptions matter. Engineers, like most employees, want to be heard. They want to contribute and be part of something meaningful. It all speaks to the appreciation they have for the efforts their employers make toward their professional growth, and the measure of effort they are willing to invest in it themselves.
A US Navy nuclear engineer and his wife have each been sentenced to about two decades behind bars for conspiring to sell classified information related to the design of nuclear-powered warships to a foreign country in exchange for thousands of dollars in cryptocurrency.
The Justice Department announced Wednesday that Jonathan Toebbe, 44, of Annapolis, Maryland, was sentenced to more than 19 years. His wife, Diana Toebbe, 46, was sentenced to more than 21 years in prison.
The couple pleaded guilty in February to conspiracy and hoped a federal judge in West Virginia would sign off on proposed plea agreements for lesser sentences. Judge Gina Groh rejected the agreements, stating that it was not in the country’s best interest to accept deals under which Jonathan Toebbe could have been sentenced to between 12 and 18 years in prison and his wife to up to three years.
The Toebbes pleaded guilty again in September as part of new agreements that exposed them to longer potential sentences, according to court documents.
The couple coordinated drop-offs of encrypted SD cards containing classified information about nuclear submarines, specifically Virginia-class vessels, to people they believed were agents of a foreign government, in exchange for thousands of dollars in cryptocurrency, according to a criminal complaint.
Prosecutors said the couple went to great lengths to hide the SD cards at the dead-drop locations over the course of several months, tucking an SD card into a Saran-wrapped peanut butter sandwich in one instance, while others were hidden inside a packet of gum and a sealed Band-Aid wrapper.
“The Toebbes conspired to sell restricted defense information that would place the lives of our men and women in uniform and the security of the United States at risk,” said Assistant Attorney General Matthew Olsen of the Justice Department’s National Security Division.
Zero-emissions vehicles, artificial intelligence, and self-charging gadgets are helping remake and update some of the most important technologies of the last few centuries. Personal devices like headphones and remote controls may be headed for a wireless, grid-less future, thanks to a smaller and more flexible solar panel. Boats can now sail human-free from the UK to the US, using a suite of sensors and AI. Chemical factories, energy facilities, trucks and ships are getting green makeovers as engineers figure out clever new ways to make them run on hydrogen, batteries, or other alternative, non-fossil fuel power sources.
Çanakkale Motorway Bridge Construction Investment Operation
An international team of engineers had to solve several difficult challenges to build the world’s largest suspension bridge, which stretches 15,118 feet across the Dardanelles Strait in Turkey. To construct it, engineers used tugboats to float out 66,000-ton concrete foundations known as caissons to serve as pillars. They then flooded chambers in the caissons to sink them 40 meters (131 feet) deep into the seabed. Prefabricated sections of the bridge deck were carried out with barges and cranes, then assembled. Completed in March 2022, the bridge boasts a span between the two towers that measures an incredible 6,637 feet. Ultimately the massive structure shortens the commuting time across the congested strait, which is a win for everyone.
When carrying a full load of rock, the standard-issue Komatsu 930E-5 mining truck weighs over 1 million pounds and burns 800 gallons of diesel per work day. Collectively, mining trucks emit 68 million tons of carbon dioxide each year (about as much as the entire nation of New Zealand). Mining giant Anglo American’s solution was to turn to hydrogen power, so it hired American contractor First Mode to hack together a hydrogen fuel cell version of the mining truck, called NuGen. Since the original Komatsu truck already had electric traction motors powered by diesel, the engineers replaced the fossil-fuel-burning engine with eight separate 800-kW fuel cells that feed into a giant 1.1-MWh battery. (The battery also recaptures power through regenerative braking.) Deployed at a South African platinum mine in May, the truck refuels with green hydrogen produced using energy from a nearby solar farm.
Hydrogen can be a valuable fuel for decarbonizing industrial processes, but most hydrogen today is produced using energy from natural gas. The cleaner alternative is to split water into hydrogen and oxygen with electrical currents, and to be sustainable, that process needs to be powered with renewables. That’s the goal of an industrial consortium in Spain comprising the four companies listed above. It’s beginning work on HyDeal España, set to be the world’s largest green hydrogen hub. Solar panels with a capacity of 9.5 GW will power electrolyzers that will separate hydrogen from water at an unprecedented scale. The project will help create fossil-free ammonia (for fertilizer and other purposes) and hydrogen for use in the production of green steel. The hub is scheduled to be completed in 2030, and according to its estimates, the project will reduce the greenhouse gas footprint of Spain by 4 percent.
Art students will often mimic the style of a master as part of their training. DALL-E 2 by OpenAI takes this technique to a scale only artificial intelligence can achieve, studying hundreds of millions of captioned images scraped from the internet. It allows users to write text prompts that the algorithm then renders into pictures in less than a minute. Compared to previous image generators, the quality of the output is getting rave reviews, and there are “happy accidents” that feel like real creativity. And it’s not just artists: urban planning advocates and even a reconstructive surgeon have used the tool to visualize rough concepts.
When the first Candela P12 electric hydrofoil goes into service next year in Stockholm, Sweden, it will take commuters from the suburbs to downtown in about 25 minutes. That’s a big improvement from the 55 minutes it takes on diesel ferries. Because the P12 produces almost no wake, it is allowed to exceed the speed restrictions placed on other watercraft; it travels at roughly 30 miles per hour, which according to the company makes it the world’s fastest aquatic electric vessel. Computer-guided stabilization technology aims to make the ride feel smooth. And as a zero-emissions way to avoid traffic congestion on bridge or tunnel chokepoints without needing to build expensive infrastructure, the boats are a win for transportation.
Petrochemical plants typically require acres of towering columns and snaking pipes to turn fossil fuels into useful products. In addition to producing toxic emissions like benzene, these facilities put out 925 million metric tons of greenhouse gas every year, according to an IEA estimate. But outside Houston, Solugen built a “Bioforge” plant that produces 10,000 tons of chemicals like fertilizer and cleaning solutions annually through a process that yields zero air emissions or wastewater. The secret sauce consists of enzymes: instead of using fossil fuels as a feedstock, these proteins turn corn syrup into useful chemicals much more efficiently than conventional fossil fuel processes, and at a competitive price. The enzymes even like to eat pieces of old cardboard that can’t be recycled anymore, turning trash into feedstock treasure. Solugen signed a deal this fall with a large company to turn cardboard landfill waste into usable plastics.
Institute for Lightweight Structures and Conceptual Design (ILEK), University of Stuttgart
Air conditioners and fans already consume 10 percent of the world’s electricity, and AC use is projected to triple by the year 2050. But there are other ways to cool a structure. Installed in an experimental building in Stuttgart, Germany, an external facade add-on called HydroSKIN employs layers of modern textiles to update the ancient technique of using wet cloth to cool the air through evaporation. The top layer is a mesh that serves to keep out bugs and debris. The second layer is a thick spacer fabric designed to absorb water—from rain or water vapor when it’s humid out—and then facilitate evaporation in hot weather. The third layer is an optional film that provides additional absorption. The fourth (closest to the wall of the building) is a foil that collects any moisture that soaks through, allowing it to either be stored or drained. A preliminary estimate found that a single square meter of HydroSKIN can cool an 8x8x8 meter (26x26x26 feet) cube by 10 kelvins (18 degrees Fahrenheit).
Consumer electronics in the U.S. used about 176 terawatt hours of electricity in 2020, more than the entire nation of Sweden. Researchers at the Swedish company Exeger have devised a new architecture for solar cells that’s compact, flexible, and can be integrated into a variety of self-charging gadgets. Silicon solar panels generate power cheaply at massive scale, but are fragile and require unsightly silver lines to conduct electricity. Exeger’s Powerfoyle updates a 1980s innovation called dye-sensitized solar cells with titanium dioxide, an abundant material found in white paint and donut glaze, and a new electrode that’s 1,000 times more conductive than silicon. Powerfoyle can be printed to look like brushed steel, carbon fiber or plastic, and can now be found in self-charging headphones by Urbanista and Adidas, a bike helmet, and even a GPS-enabled dog collar.
Collecting data in the corrosive salt waves and high winds of the Atlantic can be dull, dirty, and dangerous. Enter the Mayflower, an AI-captained, electrically-powered ship. It has 30 sensors and 16 computing devices that can process data onboard in lieu of a galley, toilets, or sleeping quarters. After the Mayflower successfully piloted itself from Plymouth in the UK to Plymouth, MA earlier this year—with pit stops in the Azores and Canada due to mechanical failures—the team is prepping a vessel more than twice the size for a longer journey. The boat is designed to collect data on everything from whales to the behavior of eddies or gyres at a hundredth the cost of a crewed voyage and without risking human life. The next milestone will be a 12,000 mile trip from the UK to Antarctica, with a return trip via the Falkland Islands.
In Oregon, the Wheatridge Renewable Energy Facilities, co-owned by NextEra Energy Resources and Portland General Electric (PGE), is combining solar, wind, and battery storage to bring renewable energy to the grid at utility scale. Key to the equation are those batteries, which stabilize the intermittency of wind and solar power. All told, it touts 300 megawatts of wind, 50 megawatts of solar, and 30 megawatts of battery storage capable of serving around 100,000 homes, and it’s already started producing power. The facility is all part of the Pacific Northwestern state’s plan to achieve 100-percent carbon-free electricity by 2040.
Correction on Dec. 2, 2022: This post has been updated to correct an error regarding the date that the suspension bridge in Turkey was completed.
Arista Networks, Inc. (NYSE:ANET) Wells Fargo 6th Annual TMT Conference November 30, 2022 5:30 PM ET
Company Participants

Anshul Sadana - COO

Conference Call Participants

Aaron Rakers - Wells Fargo
Q - Aaron Rakers

So thanks for joining us. Looking forward to the conversation. I wanted to start by talking a little bit about the evolution of Arista recently. If anybody hasn’t seen the slide deck from the Analyst Day, the company hosted a great Analyst Day event just a month or so ago, and one of the messages I felt coming out of it was that Arista started as a product company, right, and is evolving into a much bigger platform company.

So, at a high level, maybe you can help us appreciate what that means and what that evolution looks like as we think about the next five years, or whatever the timeframe might be, on that platform journey for the company.
Absolutely, Aaron. When we started, we were very focused on just data center switches, and all of you largely thought about us as this hardware box company. We sell boxes with some software on them, and that software is fairly complex, right? It’s EOS, our operating system, which adds up to at least 25 million lines of code today. It’s highly programmable and a very, very high-quality stack.
So when you look at IT operations, networking is part of IT operations, no longer just a silo. It has to be fully integrated with everything the teams are doing, because provisioning and automation and security and business outcomes are all interrelated.
So we’ve developed our stack to a point where EOS is truly becoming a platform: you can run it on any of the hardware we have, and in the end you can program it and use it the way you want. CloudVision, which is our automation stack, is not just about automating the network; now we are helping enterprises automate the automation. There’s more integration happening, not only ours, between CloudVision and all the other provisioning systems that exist.
So getting into the deep glue of enterprise IT, that’s where this is truly becoming a platform. It’s easy for people to deploy, they love the product already, and I think there is a long way to go from here into many other use cases that are all adjacent to what we do today.
Yeah, and you mentioned CloudVision, but on the stickiness of EOS: everything you’re doing wraps around EOS, right, and it’s an unmodified Linux kernel. How far can you take this? Maybe talk a little bit about some of the adjacencies you’re most excited about right now.
I would say one thing about EOS that is maybe not always appreciated is how we get the high quality. The architecture is already great; it’s foundational to everything we build. But the way networking has worked in the industry for decades is you write a feature and then you throw it over the wall to a test team. They test it and find some bugs, development fixes the bugs, then you test a little bit more, and then you ship the product to the customer.
The way we think of this is: how do you build a test framework where the product always works? So instead of having thousands of QA engineers, the size of the test team, the QA team at Arista today, is just eight people. Our software development team writes all the test cases on fully automated infrastructure. We’ve scaled that to the point where we run about 150,000 test cases every day on every product, on every release we’ve ever shipped, and it keeps improving on its own: if there’s a bug, you are likely to find it, and then you add a test case so that bug will never escape to a customer again. Very different from what our competition has been doing for two or three decades.
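The practice Sadana describes, where every escaped bug becomes a permanent automated test case that runs on every release, can be sketched in miniature. The harness below is purely illustrative; the `parse_vlan_id` function and its bugs are hypothetical and assume nothing about Arista’s actual framework:

```python
# Toy illustration of "a bug never escapes twice": each field-reported bug
# is captured as a permanent regression case that runs on every release.

def parse_vlan_id(raw: str) -> int:
    """Parse a VLAN ID, rejecting out-of-range values (hypothetical example)."""
    vlan = int(raw.strip())                # fix for escape #1: tolerate whitespace
    if not 1 <= vlan <= 4094:              # fix for escape #2: range check
        raise ValueError(f"VLAN {vlan} out of range")
    return vlan

# Regression suite: grows monotonically. A case added for a past escape
# is never deleted, so the same bug cannot ship again.
REGRESSION_CASES = [
    ("100", 100),   # normal case
    (" 42 ", 42),   # escape #1: whitespace handling once broke parsing
]

def run_regressions() -> int:
    """Run every accumulated case; return how many checks executed."""
    for raw, expected in REGRESSION_CASES:
        assert parse_vlan_id(raw) == expected
    for bad in ("0", "4095"):              # escape #2: out-of-range inputs
        try:
            parse_vlan_id(bad)
        except ValueError:
            continue
        raise AssertionError(f"{bad!r} should have been rejected")
    return len(REGRESSION_CASES) + 2

print(run_regressions())
```

The key property is that the case list only ever grows, so every release is checked against every bug the team has ever seen.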
When you take that, customers love it and it just works, and then you look at other use cases. We constantly get pulled by our customers into other segments: if I can run EOS and CloudVision, why can’t you do campus networking for me? That’s really how we got into campus; there was enough pull from customer interest there. The same thing is now happening a little bit in the routed WAN environment. Not just inside the building, but how do you leave the building and how do you go out?
There are good opportunities there. We’ve already been doing some of that work in routing with our cloud customers and some of the service providers, and we can apply it to enterprises as well. The same is true of applying other layers on top, for example Awake NDR, which is a great strength in our product portfolio.
That’s never going to be the primary revenue driver, but what we do with our network detection and response system is build a network where you can detect threats based on what’s going through the network, and that’s just an extra layer of protection that doesn’t exist today. Customers love that as well, and it’s integrated into the campus products from day one.
So when you buy an Arista campus switch, it has threat detection already built in. You turn on a license and you get that feature, and so on. So I think there’s a lot to be done there, and that’s how we grow in the adjacencies too.
Yeah. When you think about that competitively, the competitors are still fragmented across their product portfolios. So from a competitive perspective, the durability of what you’re talking about appears extremely strong, right? You don’t see the competitive landscape being able to change its path forward very effectively to compete against that single-operating-system approach.
I think that’s right, and we have some good competitors as well, so it’s not as if this is an industry that is lax and there’s no competition; it’s very intense competition. But to a great extent you’re right: with the competition, the announcements are always ahead of the execution. For us, our execution is ahead, and we want to keep it that way.
So to shift gears a little bit: I don’t think you’d ever get through a discussion with a financial analyst without being asked about the demand environment. One of the things we’re seeing in networking is that networking spend seems, so far, fairly resilient, despite the concern and choppiness in the overall macro dynamics. So as you look at it, how would you characterize the demand environment, segueing that into the visibility you get from customers as you engage with them?
In any crisis, everyone has to prioritize, and today there might be a macro event already happening. But in the end, companies cut back on free lunch and drinks and free laundry, not on networking, because networking is essential. You cannot really run your business and your workloads without enough bandwidth and the right connectivity and security in between.
We are already seeing that. Our customers are telling us this is critical infrastructure. In addition, networking is the one place where we glue everything together, and if it becomes a bottleneck, it significantly reduces the efficiency of the entire infrastructure. If you’re spending so much on compute, storage and other things, you might as well spend a little bit more to have ample bandwidth in between, with the right connectivity and the right segmentation, so that you don’t disrupt any of your business flows. So we see some of that.
The same applies to cloud companies. The cloud companies might slow down how many racks they’re adding to their data centers depending on demand, but they do not slow down DCI build-outs. If they have to build a new region, they will do the DCI build-out, the data center interconnect, upfront to get to the right outcome for that region. So we are seeing healthy demand for networking as a result, and specifically for our products.
Yeah, and how about that same question for the enterprise piece of the business? That’s one thing in the Arista story: a lot of people ask you about the cloud, and I’m certain we’ll talk more about the cloud, but the enterprise momentum you’ve seen has been fairly remarkable. How do you characterize the enterprise demand environment you’re seeing right now?
So look, for enterprise there are two big pieces of our business. One is the enterprise data center. It continues to be healthy; we’ve seen good growth and continue to see good growth there. Then you have campus, which we still call an adjacency, but it’s growing really well and is nearing roughly 10% of our annual revenue. So at some point in the near future it may not be just an adjacency; it will become core business. It’s growing very, very well.
Because of the way we’re exposed to enterprises, largely through large enterprises, which tend to feel a macro event later in the cycle or certainly don’t cut back as much, and because in campus we have such little penetration and good growth, we won’t feel the macro upfront. I think we’d be among the last companies to feel a macro event, not the first. Demand has been good, and sales teams are opportunistic, right?
They’ll find the customer that is still willing to spend a lot of money and chase that as the best opportunity. And what happened in campus is there was this debate during COVID about whether to upgrade or replace the campus at all, because people were not coming back to the office. But now it’s clear that even if you come back to the office one day a week, you still need that network to work, because when you sit there, what are you going to do? You’re going to be video conferencing all day with your colleagues all over the world. So you have to invest in that network and infrastructure and upgrade no matter what, which is what many enterprises are doing.
How about this, and I don’t know if you’ve ever talked about it, but I’ll ask anyway: how much of your business comes from what you would characterize as infrastructure refresh or replacement versus net-new footprint build-outs? Obviously that applies to the cloud vertical in particular, but I’m curious how you think about that mix of business for Arista today.
This is something that’s extremely hard to measure, for us or for anyone else, so I’ll put that out as a caution: do not try to turn this into some mathematical model that says this is what the spend is going to be. It varies quite a bit by customer as well. But for the enterprise, the math is fairly straightforward: data centers try to refresh every five to seven years. If there’s a supply shortage, it becomes seven years; if there’s no shortage, they try to do maybe five or six years. Either way, they’re not going to replace equipment before then.
In the campus environment, people have sweated their assets much longer. There’s one analyst report we’ve seen where the average life of a campus switch deployed somewhere in the world today might be between 10 and 12 years. It’s been sitting out there that long.
Cloud, on the other hand, does refresh a little bit faster; they need the efficiency too. The way to think about the cloud is that you have DCI, the data center interconnect, and the backbone, whose primary job is to send as much traffic as possible over long distances. Long distances could be 100 kilometers or thousands of kilometers. So when a newer technology can get you from 100 to 400 gig, they deploy it quickly. But if they’ve just deployed 100 gig last year, they’re not going to retrofit it immediately with 400 gig; they’re going to wait at least three or four years before they even have cycles to go back and revisit that site.
Compute is a bit harder to understand. On average, cloud companies would also like to refresh their compute every five years or so, so on simple math, one-fifth of the infrastructure should get upgraded every year. But what happens is that whatever high-end compute is sold today will be sold as mid-range compute two years from now, and as low-end compute five years from now.
So there’s a lot of reuse, and depending on the SKU and the architecture, and whether that reuse succeeds or not, the models actually vary a lot. On top of that, add supply shortages: refreshing became wishful thinking when parts were that scarce, so some of these refreshes got pushed out. It does vary by customer, but the overall goal they’re trying to get to is a refresh roughly every five or six years.
So, shifting to the next topic, and I think you alluded to it a little bit in that response: we tend to talk a lot about 400 gig. I’m curious about Arista’s position in 400 gig. I think you gave some market share metrics at the Analyst Day, but maybe help us appreciate how you’ve been able to take share in the 400 gig cycle. Where do you think we are in that cycle? And then I’m definitely going to ask you about 800 gig and 1.6T after that.
Right. There are a lot of assumptions in that question, and I’m really glad you’re asking, because there’s this perception that there’s a 400 gig cycle. What if there is no 400 gig cycle? Customers in the cloud are deploying 100 gig in high volume. We showed this at the Analyst Day as well: 100 gig actually continues for the next five to seven years. It doesn’t really slow down much.
On top of that, you have 400 gig for certain use cases, and those use cases today are data center interconnect or backbone, as well as AI. The big DCI build has been going on based on the availability of 400 gig products and the right optics, things like that. AI is somewhat newer relative to DCI, but it’s starting to happen.
But customers will continue deploying the more efficient 100 gig for a couple of years to come. So 400 gig really gets layered on top; it’s not a cycle by itself. And we’ve done phenomenally well, I would say, in our execution with our key customers, our top cloud customers and some of the tier-two clouds as well. They are extremely happy with us.
This whole notion that someone puts out an announcement and, just because they finally made a product, they somehow take away 100% of the share is just not true. We talked about the 25 million lines of code. A lot of those lines of code were written based on requirements from the cloud companies. It will take some of our competitors a decade to catch up to all of that, the automation and the APIs and the streaming telemetry and so on. Customers do want to be multi-vendor, but often that gets confused with someone else taking away a lot of share.
We’ve done very well in 400 gig so far. The market analyst reports have been published up to the Q2 or Q3 results of this year, and we have the number one market share in 400 gig ports globally among OEM vendors. There are two cloud companies in the US that build their own white boxes, and they continue to do so with their own 400 gig products. If you exclude those, we are doing significantly better than our competition, while at the same time maintaining a very strong share in 100 gig, the number one position with 40%-plus market share, and adding 400 gig on top of that.
So I think this is good execution by the entire team at Arista, especially with the cloud vertical and some of the high-end high-tech customers.
And segueing off that answer, a question that always seems to come up is white box competition, right? You’ve been fairly candid in the past about how you see it evolving, and I think you just mentioned the lines of code and how tightly you work with these cloud customers as an important attribute when we think about that white box risk. Maybe for the audience, share your thoughts on white boxes. How do you see that competitively, if at all?
This discussion has been on the table since our IPO; it was the number one risk flagged at our IPO, that somehow we’d lose our business to white boxes. What you have to understand is why these cloud companies use white boxes in the first place, and every company made its decision, at the time it made it, for different reasons.
Google did this in 2005. Guess what? There wasn't any competition in the market at that point. They looked at one networking company and asked them, hey, can you provide us the network we need at the right price? They said we can't, so Google built it on their own. Amazon looked at this whole space in 2010. In fact, they talked to us at that time, but we were a very tiny start-up, so they didn't really think AWS could run on infrastructure from a start-up. They decided they'd build on their own, and they have religion on vertical integration, if you didn't know.
They do like to build everything on their own, or buy their own planes and ships, and build their own switches when they can. Good for them, right? They can get the right results if they have the scale to put the investment into that. Come 2013, 2014, when Facebook, now Meta, had to make that decision, they had a different viewpoint. They said, you know what, the market is a lot more competitive now.
They talked to us. They said, let's partner, and you saw the result of that in the first switch that came out around 2017, 2018 with Tomahawk, a product that essentially the two companies co-developed together. They went from build-versus-buy to build-and-buy, and they've been extremely happy with the outcome because they are multi-sourced and they get all of their requirements met to their data center specs.
During the supply chain crisis, they were so thankful that we were there for them, to get them what supply and deliveries we could, and so on. Look at Microsoft, our biggest customer. They've looked at what the other cloud companies do too, and realized: if Arista is competitive and able to supply all this gear to them and meet every use case, from top of rack to spine to DCI to WAN to edge and so on, then why go through the pain of building something on their own only to not even be as competitive? They'd be somewhat behind, and it's not worthwhile.
So all these companies have made a decision that, in today's times, it makes sense not to build on their own but to buy from the industry, because the industry is extremely competitive. But the ones who were building on their own won't easily go back to buying from the industry, because they're locked into their own stacks, with their own software developed over 10, 15 years. As I mentioned, it's 25 million lines of code in our case, and guess what, these cloud companies have millions of lines of code in their own stacks as well. Who's going to port all of that work? Which is why I think this entire industry remains largely status quo.
There might be a plus or minus 5% shift here and there, but it's not going to be a massive shift in either direction. I think it's a misconception for anyone to think there's a risk. If anything, and I mentioned this on one of the earnings calls as well, at least one large cloud company, for a few use cases, not everything, but for a few use cases, is considering going from white boxes to buying from the industry. So if anything, it's actually going the other way, not more towards white boxes.
You just answered the question I was going to ask, because I was thinking, why not the reverse? So you're seeing at least one hyperscale cloud customer.
Maybe for use cases where there's additional functionality that's required that doesn't exist in their internal stack, and it would take them too long to build it. At the same time, they would go through the process and actually use products from the outside in places they've never done so before. So they have to change their controller logic, and the upstream northbound software has to adapt to that as well.
We'll see if that happens or not, but certainly I don't see any of these companies saying, you know what, we're done and we'll only build white boxes.
Yeah. We talked about 400 gig…
I want to add one thing here. We had lots of one-on-ones today, and this was the number one topic today; it has been the number one topic for the last ten years. So, on white boxes: for some of our largest cloud customers today, we are working with them on their architectures from 2025 to 2027, and in places where we are deeply entrenched, we are working on questions like: how do you cool a 1,000-watt chip and still keep it efficient for the customer? How do you get the signal integrity on standard PCB technology for six, seven inches of traces at 100 gig and 200 gig, which are the next-gen speeds? And our customers are amazed by the contribution that our teams are able to bring to the table.
And as a result of that, they have no interest in trying to do all of this by themselves. Many of you don't see the discussions and meetings that are happening three, five, seven years out. We are in these meetings daily, which is why we are so confident that this customer base is not going to go back to white boxes. They actually need us to develop all of this and get there as quickly as possible.
And that's extremely interesting. I've got thirty-five more questions and a little under nine minutes. So we're going to try...
I thought we'd only talk about white boxes.
So I think this question is going to tie things together a little bit. At the Analyst Day, your colleague Andy, one of the cofounders of the company, gave, as always, a very good presentation overview. He talked about 800 gig and 1.6T, maybe even faster cycles, and I'm going to dovetail this with the context of AI fabric networks, right? This idea that as we see more GPUs attached to servers, they're consuming just a massive amount more bandwidth.
So maybe connect the dots there: AI fabrics, the Arista opportunity, and, again, kind of help us appreciate how that's being driven by AI and GPUs.
Absolutely. Around the 2012, 2014 timeframe, IP storage over Ethernet networks was a very big deal. You had to do it in a lossless way, and 40 gig was just coming to market but wasn't enough. The pipes got saturated very, very quickly with storage traffic. Then came 100 gig and everyone was so relaxed: finally, there's enough network IO that I'm not congested and dropping traffic all day.
The same thing is happening with AI today. At 100 gig speeds or even 400 gig speeds, the AI will just consume all the bandwidth and you're still congested and dropping. The reason this matters is the way AI works: if you have a 1,000-node cluster, with 1,000 GPUs, you're doing a transformation of a data set, and if one of the nodes is still not done because it's waiting for some packets to come back, all the other 999 nodes are waiting for that transaction to complete.
Meta published a paper on this, and they showed that for many of the AI benchmarks, most of the GPUs are waiting for network IO to complete for a third of their cycles. So 33% of the GPU capacity is completely wasted. If you gave them more bandwidth, they could do the same job in the same amount of time with only 66% of the GPUs, or they could finish the entire job in 66% of the time if you keep all the GPUs. Any way you look at it, it can be a lot more efficient, and a significant cost saving.
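[The arithmetic behind this claim can be sketched in a few lines. This is our illustration, not from the talk; the function name and figures are ours, using the cited one-third network-wait fraction.]

```python
# If each GPU idles some fraction of its cycles waiting on network IO,
# the cluster delivers only (1 - fraction) of its nominal compute.

def effective_gpus(num_gpus: int, network_wait_fraction: float) -> float:
    """GPUs' worth of useful compute when each GPU spends
    `network_wait_fraction` of its cycles waiting on the network."""
    return num_gpus * (1.0 - network_wait_fraction)

cluster = 1000
wait = 1 / 3  # Meta's reported figure: ~1/3 of cycles waiting on network IO

useful = effective_gpus(cluster, wait)
# With a fast enough network (wait near 0), roughly this many GPUs
# could do the same job in the same time, or all 1,000 GPUs could
# finish it in about two-thirds of the time.
print(round(useful))  # 667
```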
So the AI groups within all these companies are coming to us, and to every other company, saying: can I go faster? That's where the need for 800 gig comes up. Those are the companies sitting down with us and talking about 200 gig, which hasn't even come to market; it will come to market two, three, four years from now, best case. And they want to start designing that in now, because they know that as soon as 1.6T comes to market, they can consume it. So it's an immense opportunity.
The AI clusters are already starting to get large, and when they get large, you need a nice, systematic network that works: you can monitor it, you can provision it, you can automate it. There's nothing better than the IP leaf-spine designs we've done so far, but now tuned towards AI workloads, with the right monitoring and buffering and other mechanisms in there, and as a result we're being pulled into a lot of these opportunities. I think AI, high-speed internet, and IP will all converge with every generation of technology that comes out now.
And I think at the Analyst Day, you guys talked about that representing a $2 billion to $3 billion adjacent market opportunity for the company, arguably in the very early innings of seeing that opportunity materialize. I guess where I get a little confused sometimes is: how does what you're talking about on the Ethernet side relate to InfiniBand? Where does InfiniBand fit in? Is it Ethernet versus InfiniBand, or do both coincide in the context of the AI fabric network build-up? So what's the delineation there, if there is any?
Well, there are certain workloads that are latency sensitive, and in HPC environments, if you look at the top 500 clusters, many of them use InfiniBand for that reason. But many of the workloads we are seeing in the large public cloud are not latency sensitive; they need a lossless network. They are IO sensitive. You cannot drop the packet.
You can provide them a better Ethernet network, like we have with our AI Spine, which has very deep packet buffers. To provide a contrast: an average top-of-rack switch today has about 32 megabytes of packet memory, and when you're trying to get all the packets through without dropping during congestion, you buffer them up in those 32 megabytes.
The AI Spine we have, like the 7800, has eight gigabytes of packet memory per chip. That's a lot more packet memory than you would imagine, but you need that to have a completely lossless architecture. That's the kind of tradeoff that you're looking at. These products cost a little bit more, but if in the end you can save GPUs, why not? So I think that's why we are headed towards these architectures, in a way that nothing else can scale to.
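[To put the 32 MB versus 8 GB contrast in perspective, here is a back-of-the-envelope calculation, ours rather than the speaker's, of how long each buffer can absorb a single 400 Gb/s stream at full line rate before it must drop packets.]

```python
# Time a switch buffer can absorb line-rate traffic before overflowing.

def absorb_ms(buffer_bytes: float, line_rate_gbps: float) -> float:
    """Milliseconds of traffic at `line_rate_gbps` a buffer can hold."""
    bits = buffer_bytes * 8
    return bits / (line_rate_gbps * 1e9) * 1e3

tor = absorb_ms(32e6, 400)   # typical top-of-rack: 32 MB -> 0.64 ms
spine = absorb_ms(8e9, 400)  # deep-buffer spine chip: 8 GB -> 160 ms
print(f"ToR: {tor:.2f} ms, AI spine: {spine:.0f} ms")
```

A sub-millisecond buffer drops packets almost immediately under a sustained burst, while a deep buffer can ride out congestion events hundreds of times longer, which is the tradeoff the speaker is describing.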
You can build a 256-node InfiniBand cluster, but if someone says, can you build me a 32,000-node cluster and operate it like a cloud, without having to bring down the whole cluster for maintenance or operations, then you need to be back in the leaf-spine type of architecture we've done, essentially a distributed mechanism to really scale this up.
So it's interesting. We've obviously seen Meta make some fairly public announcements around their AI RSC deployment, a big driver of their CapEx spend. We've recently seen NVIDIA announce a multi-year collaboration talking about many thousands of GPUs deployed in AI projects. When we see those kinds of things, should we think, hey, those are net-new adjacent network build-outs that are obviously part of that $2 billion to $3 billion TAM opportunity starting to materialize for Arista?
Absolutely. I think as you see more AI move to the cloud, that's a great opportunity for Arista. That's, in a nutshell, how you can measure it. The specifics are different for each cluster; Meta has different types of architectures, and so on.
Okay. In the two minutes we've got left, I want to ask you about software strategy, right? At the heart of it, at the end of the day, Arista was founded on software differentiation as far as the strategy, and you've obviously talked about expanding into adjacencies around that core software. How does the company think about monetizing software? Is there an evolutionary path where we start to think about Arista being a software-centric, subscription line-item company? Just curious as to how you guys are thinking about that internally.
Yeah. So the software line item is actually quite significant already. But the way to think of software subscription, or a SaaS model, is this: when you're delivering value in a way the customer appreciates, where they can turn it on, turn it off, adjust the number of seats, the number of features and so on at any given time, and they're getting constant value every month, every quarter, every period with updates, then a subscription model is justified.
Take CloudVision: CloudVision is pretty much offered only as a subscription product to our customers. It can run on-prem or in the cloud, and when it runs in the cloud, there's significant value in how we manage CloudVision for our customers, so that's how they manage, run, and automate their infrastructure. But it's all offered as a SaaS model. We also have licenses for routing and so on that are a line item that gets added on; if you want to turn on more functionality on the switches, you're paying more for the product as well.
What we don't like to do is an unnecessary conversion of hardware to subscription just to make it look like subscription. That's essentially a leasing model. There's no real value to the customer other than telling them they need to pay more if they keep using the product longer. That's not what they like, because they think they are buying something perpetual.
So I think we're somewhere in between. We're not trying to do an artificial shift just to please Wall Street. I think it has to be organic in your business, and then the results will show. You see this in CloudVision, you see this in the Awake part of our business, you see this in our DANZ Monitoring Fabric. These are all subscription offerings.
In the 45 seconds we do have left, is there anything I didn't ask you? Any comments you might have on supply chain dynamics, or anything else we should have asked or should take away from this discussion?
Look, supply will recover. We've said this in earnings calls as well: probably towards the end of '23 is our best-case guess, but let's see what happens to the whole world. The best opportunity in front of us is still growth. Cloud has long-term, systematic growth. This is a sector that has ups and downs.
That comes with the segment; we can't ignore it. But at the same time, when I ask the cloud customers what their plans are for the next 10, 15, 20 years, they just see growth. In enterprise datacenter, we are still underpenetrated. In campus, we're just starting. It's a tremendous growth opportunity, and I'm as excited as I was at the time of the IPO that we still see that growth and a great opportunity to keep on taking share.
Perfect. Anshul, we're right on time. Thank you so much for joining us.
Thank you, Aaron.
Remote work is not for every business and it may not be everyone’s cup of tea. When my co-founder and I decided to build a distributed engineering team for our startup, numerous questions raced through our minds: Will they be productive? How will decisions be made? How do we keep the culture alive?
Today, we manage a remote team of about a dozen engineers, and we’ve learned quite a bit along the way.
Here are some tips we hope you find effective. They are probably more applicable to earlier-stage startups and less so to larger organizations.
In an office setting, employees have ample opportunities to interact with colleagues, and these conversations organically create a sense of authenticity. But in a remote work setting, there is no such privilege.
Some of our founder friends have used services to monitor or micromanage their employees during work hours, but we feel this is unproductive and antithetical to building a positive culture.
The introduction of pair programming, an agile software development technique where two engineers simultaneously work on the same issue, fosters collaboration and creates opportunities for developers to have conversations as they would in an office pantry. We try to pair two programmers for a sustained period of time (about 10 weeks) before considering a rotation or switch.
Some may argue that pair programming is a waste of time on the basis that if each individual can produce X output, then it makes sense to produce twice that output by having each of them work on separate problems.
We find this view limiting. First, pair programming results in higher quality, since two brains are generally better than one. When engineering systems are incredibly complex, having a thoughtful “sanity checker” is almost always a good idea, as this prevents mediocre decisions and helps thwart downstream problems, which can be time-consuming to resolve later. In my experience, it also leads to faster problem resolution. To elucidate this point: if a pair can solve each problem in half the time, then in the same time frame the output of two programmers working as a pair will still be twice that of a single programmer, matching two working separately.
By Yongxing Deng, cofounder and CTO of Aloft, a real estate technology startup based in Seattle, WA.
As an engineer, turning an idea into reality can be one of the most fulfilling feelings. For some of us, providing physical products or software solutions might feel great, but we might want to pursue entrepreneurship. How does one transition from building a product to building a company?
When building within the confines of a lab or a computer, it is often tempting to plan everything out before getting started. Unfortunately, that is often not possible when building a company; in business, there are too many variables that are either out of your control or unknowable.
Founding a business is the process of derisking by eliminating uncertainty, but you can only do so if you are able to pursue ideas while holding space for the unknowns. Don’t let the unknowns stop you in your tracks; either work to remove them or work around them.
In school and in your day-to-day job as an engineer, you might be used to being presented with a concrete problem, so it might be your instinct to jump to finding a solution. However, not all problems are created equal, and pursuing the appropriate problem that matches your skill set is often as important for your overall success as your ability to solve a problem.
Take your time in finding a business problem you’re genuinely passionate about. Do you have any insight into or experience with the problem that makes you uniquely qualified to solve it? Can you see yourself working on this problem for years, if not decades to come?
One of the core company-building skills you as an engineer might not have as much practice in is sales. In addition to selling to customers, you will have to sell your idea to investors to raise capital. You’ll also have to sell the vision of the company to your potential teammates; for many founders, recruiting is one of the most time-consuming parts of their job.
The good news: Sales is a learnable skill. If you have never encountered sales, a good framework to get started on is BANT. Using this framework (or something like it) can help you qualify early and win deals faster. Active listening can also help you understand your speaker’s needs and wants more accurately.
One of the hardest transitions in your journey from an engineer to a founder is becoming a manager. Inevitably, your team will encounter a challenging problem, and you may feel like you know exactly how to solve it. Fight the instinct that you might have to solve that problem yourself. If you want to build a strong and sustainable team, you should provide them opportunities to learn and grow—and don’t forget to provide them credit when they do a great job.
Finally, don’t forget your strengths as an engineer. For example, analytical skills are highly valuable in the business world. Income statements, balance sheets, capitalization tables... while these concepts can be intimidating at first, once you understand them, you might often find them quite intuitive and helpful in understanding your business. A crash course or two on these subjects can reap plenty of benefits.
Another potential strength you have is risk assessment. Seeing a potential risk down the line allows you and the company to better prepare for what’s to come. In those cases, communicating these risks artfully to the team can often help your company navigate tougher terrain.
Some of the world’s most valuable companies were started by engineers: Microsoft, Alphabet, Meta, etc. If you’re thinking about amplifying your impact through entrepreneurship, just remember: You already have it in you.
My dad always wanted me to become an engineer because he wanted to see me succeed. The industry has low unemployment rates and he thought I should always be able to find work.
I listened to his advice and decided to become a computer science major in college and later become a software engineer.
I graduated about four years ago and since then I've worked as a full stack developer, a front-end web developer, a front-end engineer at JP Morgan Chase & Co, and a developer evangelist at Twilio — which I say is a cross between developing, marketing, and product management.
I also create content for social media. On YouTube, I have 145,000 followers and I have 63,100 TikTok followers.
I talk about what I like about software engineering more than what I don't like. Sure, there are a lot of perks to being a software engineer, such as six-figure salaries and free food, but some things are less than ideal.
Here are the top 5 things I don't like about being a software engineer.
Programmers are often building things that have never been made before and there aren't references on how to do it. And it's incredibly exhausting work.
Plus, the more you move up the ladder as a programmer, the more expectations there are on top of your programming duties and it can feel like a never-ending growing list of things to do. It also doesn't help that most teams I've seen are undermanned.
I know I've burned out when I stop feeling fulfilled or excited by my work.
I think the reason many software engineers burn out is because there's pressure to code even when we're home.
Some programmers will code at home to try and solve problems they don't know how to fix yet. And if you're not doing this, you may fall behind. Others work on projects they're passionate about, so they're coding at night for fun.
Also, technology moves so fast so you need to keep learning to stay up to date. There is this added pressure to constantly read blog posts, engage with open-source coding, and work on personal projects even when you're off the clock.
I can't think of any other industry where you treat your job like a hobby as well. I like to do other things, like play basketball, and it's hard to find room for things outside of coding — there is an expectation that you need to eat, sleep, and breathe code all the time.
There are so many videos about people making upwards of $120,000 right out of college or $200,000 in their twenties in this industry. That really pushes people to try and make as much money as they can and to jump from one job to another seeking more success.
It's difficult to feel satisfied where you are professionally since there may be something better or higher paying somewhere else.
When I'm preparing for technical interviews, I don't have time for anything else. I'm basically a student after 5 p.m. on top of my regular job. I also don't think technical interviews accurately show my, or anyone's, abilities.
It was around the last week of December when I was preparing for my technical interviews. And on New Year's Eve, I could only celebrate with my wife and her family for a few minutes before returning to studying for a technical interview that was a week away.
Plus, it's terrible to be rejected from a job opportunity on the 4th, 5th, or 6th round of interviews because you've already dedicated so much time just to be considered.
Programming is a competitive field but there are way more jobs than there are programmers. But there's a constant feeling that you may lose your job if you aren't the very best developer on your team. A lot of programmers end up with imposter syndrome and constantly compare themselves to their peers — which is really unhealthy.
If you are a software engineer, know that you bring value to the tech world and your company. And if you're feeling burnout, unfulfilled, or worried that you're not getting as much done as your peers, understand that programming ebbs and flows. It's extremely hard to even become a software engineer so think about how far you've come already.
The world needs problem solvers like software engineers and the opportunities for people in this industry are pretty much endless given the digital transformation the world is going through. I can't think of any other field that can compare as far as job security and the amount of high paying positions you can get.
On November 4, just hours after Elon Musk fired half of the 7,500 employees previously working at Twitter, some people began to see small signs that something was wrong with everyone’s favorite hellsite. And they saw it through retweets.
Twitter introduced retweets in 2009, turning an organic thing people were already doing—pasting someone else’s username and tweet, preceded by the letters RT—into a software function. In the years since, the retweet and its distant cousin the quote tweet (which launched in April 2015) have become two of the most common mechanics on Twitter.
But on Friday, a few users who pressed the retweet button saw the years roll back to 2009. Manual retweets, as they were called, were back.
The return of the manual retweet wasn’t Elon Musk’s latest attempt to appease users. Instead, it was the first public crack in the edifice of Twitter’s code base—a blip on the seismometer that warns of a bigger earthquake to come.
A massive tech platform like Twitter is built upon very many interdependent parts. “The larger catastrophic failures are a little more titillating, but the biggest risk is the smaller things starting to degrade,” says Ben Krueger, a site reliability engineer who has more than two decades of experience in the tech industry. “These are very big, very complicated systems.” Krueger says one 2017 presentation from Twitter staff includes a statistic suggesting that more than half the back-end infrastructure was dedicated to storing data.
While many of Musk’s detractors may hope the platform goes through the equivalent of thermonuclear destruction, the collapse of something like Twitter happens gradually. For those who know, gradual breakdowns are a sign of concern that a larger crash could be imminent. And that’s what’s happening now.
Whether it’s manual RTs appearing for a moment before retweets slowly morph into their standard form, ghostly follower counts that race ahead of the number of people actually following you, or replies that simply refuse to load, small bugs are appearing at Twitter’s periphery. Even Twitter’s rules, which Musk linked to on November 7, went offline temporarily under the load of millions of eyeballs. In short, it’s becoming unreliable.
“Sometimes you’ll get notifications that are a little off,” says one engineer currently working at Twitter, who’s concerned about the way the platform is reacting after vast swathes of his colleagues who were previously employed to keep the site running smoothly were fired. (That last sentence is why the engineer has been granted anonymity to talk for this story.) After struggling with downtime during its “Fail Whale” days, Twitter eventually became lauded for its team of site reliability engineers, or SREs. Yet this team has been decimated in the aftermath of Musk’s takeover. “It’s small things, at the moment, but they do really add up as far as the perception of stability,” says the engineer.
The small suggestions of something wrong will amplify and multiply as time goes on, he predicts—in part because the skeleton staff remaining to handle these issues will quickly burn out. “Round-the-clock is detrimental to quality, and we’re already kind of seeing this,” he says.
Twitter’s remaining engineers have largely been tasked with keeping the site stable over the last few days, since the new CEO decided to get rid of a significant chunk of the staff maintaining its code base. As the company tries to return to some semblance of normalcy, more of their time will be spent addressing Musk’s (often taxing) whims for new products and features, rather than keeping what’s already there running.
This is particularly problematic, says Krueger, for a site like Twitter, which can have unforeseen spikes in user traffic and interest. Krueger contrasts Twitter with online retail sites, where companies can prepare for big traffic events like Black Friday with some predictability. “When it comes to Twitter, they have the possibility of having a Black Friday on any given day at any time of the day,” he says. “At any given day, some news event can happen that can have significant impact on the conversation.” Responding to that is harder to do when you lay off up to 80% of your SREs—a figure Krueger says has been bandied about within the industry but which MIT Technology Review has been unable to confirm. The Twitter engineer agreed that the percentage sounded “plausible.”
That engineer doesn’t see a route out of the issue—other than reversing the layoffs (which the company has reportedly already attempted to roll back somewhat). “If we’re going to be pushing at a breakneck pace, then things will break,” he says. “There’s no way around that. We’re accumulating technical debt much faster than before—almost as fast as we’re accumulating financial debt.”
He presents a dystopian future where issues pile up as the backlog of maintenance tasks and fixes grows longer and longer. “Things will be broken. Things will be broken more often. Things will be broken for longer periods of time. Things will be broken in more severe ways,” he says. “Everything will compound until, eventually, it’s not usable.”
Twitter’s collapse into an unusable wreck is some time off, the engineer says, but the telltale signs of process rot are already there. It starts with the small things: “Bugs in whatever part of whatever client they’re using; whatever service in the back end they’re trying to use. They’ll be small annoyances to start, but as the back-end fixes are being delayed, things will accumulate until people will eventually just give up.”
Krueger says that Twitter won’t blink out of life, but we’ll start to see a greater number of tweets not loading, and accounts coming into and out of existence seemingly at a whim. “I would expect anything that’s writing data on the back end to possibly have slowness, timeouts, and a lot more subtle types of failure conditions,” he says. “But they’re often more insidious. And they also generally take a lot more effort to track down and resolve. If you don’t have enough engineers, that’s going to be a significant problem.”
The juddering manual retweets and faltering follower counts are indications that this is already happening. Twitter engineers have designed fail-safes that the platform can fall back on so that the functionality doesn’t go totally offline but cut-down versions are provided instead. That’s what we’re seeing, says Krueger.
Alongside the minor malfunctions, the Twitter engineer believes that there’ll be significant outages on the horizon, thanks in part to Musk’s drive to reduce Twitter’s cloud computing server load in an attempt to claw back up to $3 million a day in infrastructure costs. Reuters reports that this project, which came from Musk’s war room, is called the “Deep Cuts Plan.” One of Reuters’s sources called the idea “delusional,” while Alan Woodward, a cybersecurity professor at the University of Surrey, says that “unless they’ve massively overengineered the current system, the risk of poorer capacity and availability seems a logical conclusion.”
Meanwhile, when things do go kaput, there’s no longer the institutional knowledge to quickly fix issues as they arise. “A lot of the people I saw who were leaving after Friday have been there nine, 10, 11 years, which is just ridiculous for a tech company,” says the Twitter engineer. As those individuals walked out of Twitter offices, decades of knowledge about how its systems worked disappeared with them. (Those within Twitter, and those watching from the sidelines, have previously argued that Twitter’s knowledge base is overly concentrated in the minds of a handful of programmers, some of whom have been fired.)
Unfortunately, teams stripped back to their bare bones (according to those remaining at Twitter) include the tech writers’ team. “We had good documentation because of [that team],” says the engineer. No longer. When things go wrong, it’ll be harder to find out what has happened.
Getting answers will be harder externally as well. The communications team has been cut down from between 80 and 100 people to just two, according to one former team member who spoke to MIT Technology Review. "There's too much for them to do, and they don't speak enough languages to deal with the press as they need to," says the engineer.
When MIT Technology Review reached out to Twitter for this story, the email went unanswered.
Musk’s recent criticism of Mastodon, the open-source alternative to Twitter that has piled on users in the days since the entrepreneur took control of the platform, invites the suggestion that those in glass houses shouldn’t throw stones. The Twitter CEO tweeted, then quickly deleted, a post telling users, “If you don’t like Twitter anymore, there is awesome site [sic] called Masterbatedone [sic].” Accompanying the words was a photo of his laptop screen, open to Paul Krugman’s Mastodon profile and showing the economics columnist trying multiple times to post. Despite Musk’s attempt to highlight Mastodon’s unreliability, its success has been remarkable: nearly half a million people have signed up since Musk took over Twitter.
It’s happening at the same time that the first cracks in Twitter’s edifice are starting to show. It’s just the beginning, expects Krueger. “I would expect to start seeing significant public-facing problems with the technology within six months,” he says. “And I feel like that’s a generous estimate.”
First insights into engineering crystal growth by atomically precise metal nanoclusters have been achieved in a study performed by researchers in Singapore, Saudi Arabia and Finland. The work was published in Nature Chemistry on November 10, 2022.
Ordinary solid matter consists of atoms organized in a crystal lattice. The chemical character of the atoms and the lattice symmetry define the properties of the matter, for instance, whether it is a metal, a semiconductor or an electric insulator. The lattice symmetry may be changed by ambient conditions such as temperature or high pressure, which can induce structural transitions and transform even an electric insulator into an electric conductor, that is, a metal.
Larger identical entities such as nanoparticles or atomically precise metal nanoclusters can also organize into a crystal lattice, forming so-called metamaterials. However, information on how to engineer the growth of such materials from their building blocks has been scarce, since crystal growth is typically a self-assembly process.
Now, first insights into engineering crystal growth by atomically precise metal nanoclusters have been achieved in a study performed by researchers in Singapore, Saudi Arabia and Finland. They synthesized metal clusters consisting of only 25 gold atoms, one nanometer in diameter. These clusters are soluble in water due to the ligand molecules that protect the gold. This cluster material is known to self-assemble into well-defined, close-packed single crystals when the water solvent is evaporated. However, the researchers found a novel way to regulate the crystal growth: adding tetra-alkyl-ammonium molecular ions to the solvent. These ions affect the surface chemistry of the gold clusters, and their size and concentration were observed to have an impact on the size, shape and morphology of the formed crystals. Remarkably, high-resolution electron microscopy images of some of the crystals revealed that they consist of polymeric chains of clusters with four-gold-atom interparticle links. The demonstrated surface chemistry now opens new ways to engineer metal cluster-based metamaterials for investigations of their electronic and optical properties.
The cluster materials were synthesized at the National University of Singapore, the electron microscopy imaging was done at the King Abdullah University of Science and Technology in Saudi Arabia, and the theoretical modelling was done at the University of Jyvaskyla, Finland.