My heyday in programming was about five years ago, and I’ve really let my skills fade. I started finding myself making excuses for my lack of ability. I’d tackle harder ways to work around problems just so I wouldn’t have to code. Worst of all, I’d find myself shelving projects because I no longer enjoyed coding enough to do that portion. So I decided to put in the time and get back up to speed.
Normally, I’d get back into programming out of necessity. I’d go on a coding binge, read a lot of documentation, and cut and paste a lot of code. It works, but I’d end up with a really mixed understanding of what I did to get the working code. This time I wanted to structure my learning so I’d end up with a more, well, structured understanding.
However, there’s a problem. Programming books are universally boring. I own a really big pile of them, and that’s after I gave a bunch away. It’s not really the fault of the writer; it’s an awkward subject to teach. It usually starts off by torturing the reader with a chapter or two of painfully basic concepts with just enough arcana sprinkled in to massage a migraine into existence. Typically they also like to mention that the arcana will be demystified in another chapter. The next step is to make you play typist and transcribe a big block of code with new and interesting bits into an editor and run it. Presumably, the act of typing along leaves the reader with such a burning curiosity that the next seventeen pages of dry monologue about the thirteen lines of code are transformed into riveting prose within the reader’s mind. Maybe a structured understanding just isn’t worth it.
I wanted to find a new way to study programming. One where I could interact with the example code as I typed it. I wanted to end up with a full understanding before I pressed that run button for the first time, not after.
When I first read about literate programming, my very first instinct said: “nope, not doing that.” Donald Knuth, who is no small name in computing, proposes a new way of doing things in his Literate Programming. Rather than writing the code in the order the compiler likes to see it, write the code in the order you’d like to think about it along with a constant narrative about your thoughts while you’re developing it. The method by which he’d like people to achieve this feat is with the extensive use of macros. So, for example, a literate program would start with a section like this:
Herein lies my totally rad game in which a bird flaps to avoid pipes. The code is structured in this manner:

<<*>>=
<<included files>>
<<the objects and characters in my game>>
<<the main loop>>
@

This is the main loop. It contains the logic and state machine, but it also has a loop to update all the graphics within each object.

<<the main loop>>+=
<<the game logic>>
<<the graphics>>
@
In this example, you’d later write things like this.
In this next bit I am going to define the bird who flaps. To do this I will have to create an object. The object will contain the position of my bird, the state of its flapping wings, and the physics by which its perpetual descent is governed.

<<the objects within my game>>+=
<<flapping bird>>
@

<<flapping bird>>=
class Flapping_Bird:
    <<initialize the bird>>
@

The bird is initialized in the middle of the left side of the screen. As soon as the user grazes any key for the first time, the bird will begin to suffer. To enable this I will need to know what size window the bird inhabits, and with this information I will determine the scale of the bird and its initial location.

<<initialize the bird>>=
def __init__(self, screenx, screeny):
    etc..
Okay, this will take a bit of deciphering, and I probably didn’t nail the syntax. In simple terms, anything between a <<>>= (or <<>>+=) and a @ is real code. If you see <<>>+=, it means add this bit of code to the previous definition (which was defined by <<>>=). Well, Wikipedia explained it better.
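For the curious, the mechanics of that expansion ("tangling," in Knuth's terminology) are simple enough to sketch. Here's a toy Python version (purely illustrative, not Knuth's actual WEB or noweb tooling) that recursively splices <<name>> references into compiler-ready source:

```python
import re

# Toy "tangle" step: chunks map names to code; a line consisting of
# <<name>> is replaced by that chunk's (recursively expanded) contents.
chunks = {
    "*": "<<included files>>\n<<the main loop>>",
    "included files": "import pygame, sys",
    "the main loop": "while True:\n    <<the game logic>>",
    "the game logic": "handle_events()",
}

def tangle(name, indent=""):
    lines = []
    for line in chunks[name].splitlines():
        m = re.match(r"(\s*)<<(.+)>>$", line)
        if m:  # a chunk reference: splice in its expansion, keeping indentation
            lines.append(tangle(m.group(2), indent + m.group(1)))
        else:
            lines.append(indent + line)
    return "\n".join(lines)

print(tangle("*"))
# prints:
# import pygame, sys
# while True:
#     handle_events()
```

Real literate programming tools also do the reverse ("weaving" the narrative into a typeset document), but the tangling above is the part that lets you write chunks in whatever order you like to think about them.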
That introduction aside, the concept that really stuck with me was the idea of writing down the thoughts in your head along with your code.
Ordinarily, I’d say this would be bad to put into practice. Your coworkers won’t really appreciate paragraphs of exposition between every line. The whole point of learning a programming language is to become fluent in it. The goal is, in my mind at least, to get to the point that you’ll be able to read code and logic flow like you’d read a book. (Along with a helpful smattering of comments to orient you.) Of course, I’m certainly not Donald Knuth, and I’m not even a very good programmer, so don’t take that as gospel.
Which gets me to the core of the article. It’s something I started doing soon after I read about literate programming, and it’s tripled how fast I learn programming. It’s also made it more fun.
I’ve been working through a great beginner book on pygame, Making Games with Python and Pygame by Al Sweigart, which is free online. Below is the very first example in the book, and it is presented in the standard way. The reader is expected to type the code into their text editor of choice and run it. After the code performs as expected, the reader is then expected to go through a line-by-line dissection in order to figure out what each line did. This works, but I think it’s typically better at teaching someone to transcribe code than to actually write code.
import pygame, sys
from pygame.locals import *

pygame.init()
DISPLAYSURF = pygame.display.set_mode((400, 300))
pygame.display.set_caption('Hello World!')
while True: # main game loop
    for event in pygame.event.get():
        if event.type == QUIT:
            pygame.quit()
            sys.exit()
    pygame.display.update()
With my revelations from literate programming my code started to look like this:
#alright, we remember this. The compiler or interpreter or whatever gets mad if we don't import the functions and files we need first
import pygame, sys #pygame is the game library I'm using, sys is full of useful system calls. I think I'm importing this one for the sys.exit() function later. I skipped ahead and read that this is similar to breaking from the loop or pressing ctrl-D. Maybe this is a cleaner way to exit a program and a bit more universal? I don't know but I don't want to get bogged down at this point, so I'll remember the question for later.
from pygame.locals import * #I've already imported pygame, but I'd like to get the stuff inside pygame.locals out. The book mentions that this is mostly useful constants like the QUIT I'll be using later.
I would attempt to explain as much as I could about what each line was doing. It’s a bit tedious, but it helped me note the bits I remembered and figure out which parts I only thought I recognized.
pygame.init() #I should initialize pygame before I start using it. No complaints with that logic here. Apparently it will throw errors if I don't.
### Yep, tested it by moving pygame.init() down a few lines. It definitely throws errors.
One thing I realized was pretty fun: keeping a log of my experiments in my code. Since Python doesn’t really care, I’d just add more hashes to show when I’d gone back to try something out. Anyway, the rest is at the end of the article if you want to read the full content of my inane and perhaps incorrect comments.
While I’m still not sold on literate programming (and perhaps I’m not at a level to be sold on it anyway), I definitely benefited from some of its core ideas.
Programming is one of those things that’s really hard to take notes on. I can’t imagine anything more useless than writing, “print(‘the thing you want to print’) – prints something to the console,” on a bit of lined notebook paper. What good will that do?
Programming is about experience, developing good thought patterns, and clearly understanding what the magic is doing. If you’ve been having trouble making programming stick for you, give this a try. Maybe it will help. Looking forward to thoughts in the comments. Any tricks you use for learning programming?
DISPLAYSURF = pygame.display.set_mode((400, 300)) #gosh I hate all caps, but I'm gonna trust that the book writer knows what he's up to. This is the main "surface" on which everything will be drawn and defines the size of the game's window. Not entirely sure what a surface is yet in the context of pygame.
###played with it a bit, changed the window size, etc. It doesn't like it if the (x,y) of the screen isn't in a tuple
pygame.display.set_caption('Butts') #change the title bar. I'm a dolt. Noting that this is all in the display bit of pygame.
while True: # main game loop
    for event in pygame.event.get(): #this bit gets all the events that happened and cycles through them one by one until it's done. I think each event might be an object, but I'm not sure.
        if event.type == QUIT: #The book mentioned that the event types are just numbers and I imported QUIT from pygame.locals earlier with the from statement. I could have left the from statement out and also gotten this with pygame.locals.QUIT
        ###yep, just for the heck of it I tested that and it worked.
            pygame.quit() #quit pygame first. This seems important.
            sys.exit() #quit out of the program second.
    pygame.display.update() #just going out on a limb here, but my guess is that this function updates that DISPLAYSURF I made earlier.
Sachin Dev Duggal is a serial entrepreneur who created Builder.ai to make building software as easy as ordering pizza.
You enter the entrepreneurial world with an idea, passion and, hopefully, strong business savvy to boot. In decades past, this was typically enough but not anymore. Whether you rent a storefront or operate fully online, you’ll be in need of the software development know-how required to thrive in the digital realm. From app creation to website building, entrepreneurs face several challenges as they attempt to launch and maintain a digital presence.
Lacking the necessary skills isn’t a death sentence for your small business, though. After all, not too many entrepreneurs have degrees in computer science or software engineering. All you need is to be able to identify any technological shortcomings and take the necessary steps to build around them.
Talented entrepreneurs know how to market themselves and reach consumers online and through social media channels, but providing a smooth and rewarding digital experience for customers is an entirely different beast. This struggle is most apparent when small business owners try to translate their operations into the virtual realm, and it presents a dilemma when it comes to finding a solution. It can be very time-consuming for you to learn how to develop your own applications, and it’s often a major distraction from other important matters in your day-to-day schedule of running a business.
So, the natural solution would be to outsource the job, right? If you somehow already have deep pockets, then bringing on a software developer is an easy fix. But because many small businesses are operating on thin margins, hiring a developer can be too significant of an investment. The idea of paying a market-rate, six-figure salary for someone to build an app from scratch (which can take a long time to accomplish) is just not feasible for the average mom-and-pop shop and the countless marketplace disruptors with a million-dollar idea and only a few thousand bucks in the bank.
This is also assuming that a small business would be able to recruit a developer when the industry is expecting up to four million openings by 2025, according to a recent IDC report. The lack of developers only drives the cost up further, as small businesses have to pay an even higher salary to compete with the corporations and billion-dollar startups vying for those same recruits.
The troubles of creating a custom app don’t stop once it’s built. For anyone who’s able to code or commission an app for their business, the next step is maintenance. It’s a pressure that never goes away but must be addressed at all times if you want to keep customer satisfaction and retention high.
Debugging whenever a user reports a glitch or updating an app when new features are demanded is the kind of work that can quickly drain a small business of its money and a small business owner of their time. Whether you try to keep up with it yourself or have to outsource these tasks, the unpredictability of app maintenance is an unnecessary stress in an environment as cutthroat as the small business ecosystem.
Yet, other options exist that don’t pigeonhole small business owners into spending all their time or money. Low-code and no-code platforms provide the ability to create apps and websites without (as the name suggests) writing endless lines of code. Sound like it’s exactly what you need? It’s not so simple.
Countless startups are throwing their hats in the ring, and these kinds of offerings have been touted as a be-all-end-all solution for small business owners with no prior tech or coding experience. Yes, low-code and no-code solutions do remove the tedium of programming. But what they still require is a fundamental knowledge of coding languages and app development. This kind of service surely has its place, particularly as an excellent tool for real developers, project managers and other tech-adjacent professions that understand software development and are in need of a streamlined workload.
It’s much more difficult to reap the benefits of this kind of solution when the underlying problem for small business owners remains.
As I mentioned, though, low-code and no-code platforms aren’t the last stop on the train to simplifying app development. Recent developments in automation, artificial intelligence and machine learning mean unique and customizable apps can be created for small businesses without the baggage that comes with learning code or hiring a developer. For example, an AI-powered platform that collects data on existing app structures, features and designs could put together an app based on your vision and specifications, like an assembly line machine.
This kind of technology is still relatively in its infancy, but the progression of these automated development programs will likely be able to match the capabilities of human developers in time. Businesses have gotten around this by using a combination of AI and humans, which offers the best of both worlds. The benefit is that this tech often requires less time and a relatively low investment.
The only barrier still holding you back from fully digitizing your business is not being aware of emerging technologies. Small business owners can now create apps, often without learning to code or hiring someone to do it for them. Many still struggle to find the perfect solution, but you could become an early adopter of the AI-powered, automated software development platforms that could help small businesses grow in the upcoming years.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
Learning a new language is no easy task, and for programming languages, it’s no simpler.
There are many reasons people want to learn to code, with some doing it ultimately to start a new career and others because they enjoy it.
The U.S. Bureau of Labor Statistics estimates that the demand for software developers will increase by 24% from 2016 to 2026. With the rise of technology year after year, the demand for coding isn’t going anywhere. Ultimately, your coding knowledge could land you a job as a developer if that’s the route you’re interested in.
With thousands of resources, boot camps, courses and online tutorials, people can now learn to code on their own and at a comfortable pace. Whether you're new to coding or an experienced programmer wanting to learn a new language, here are three tips to help you get started.
1. Take Advantage Of Online Resources
The internet gives you an endless wealth of information at your fingertips, and you should take full advantage of it. Learning a new programming language won’t happen overnight, but you can speed along the process with the right tools.
Based on a study conducted by Stanford University, experienced programmers rely primarily on four things when searching for information to learn a new coding language:
What you need to do is find the right resources and tools to start.
The more you practice writing code, the quicker you’ll succeed at learning it fluently.
2. Use Second-Language Acquisition
Learning any new programming language is no easy task. If you want to learn one faster, you have to treat it the same way you’d treat learning a spoken language. Start by using second-language acquisition (SLA).
SLA is the process and method of learning a second language as well as the scientific disciplines that come with it.
Embry-Riddle Aeronautical University conducted a study about the effect of integrating SLA theories into learning a new programming language. The results showed that when students learned a new program with a cognitive framework, they were able to learn quicker and more effectively. In a cognitive framework, cognitions come before behavior and enhance a person’s perception, ability to process information, thinking patterns, problem-solving skills and more.
To succeed in learning a new program, you need to have a clear mental model. The study describes this as “how someone explains their thought process and how it operates in the real world through our experiences in life.” If you don’t possess a clear mental model of your programs and how they operate as systems, you may still be able to understand certain elements, but it’ll be difficult for you to grasp them in their entirety.
3. Don’t Cram Information
According to a research study performed by UCLA, cramming information is associated with more learning problems and less sleep. Your ability to retain information while cramming decreases, and your brain only remembers the beginning and end of your study sessions.
To overcome the urge to cram while learning a new programming language, set up a study schedule, and stick to it. It’s more beneficial to study in 20- or 30-minute blocks rather than hours at a time so you don’t experience fatigue, lethargy and boredom. It’s easier to stay motivated when you grant yourself breaks in between study periods for better focus and frame of mind.
If you’re trying to eventually learn several coding languages, which is very possible with enough time and effort, take one language at a time.
Over To You
If you’re thinking about learning a new programming language, these tips will help you do it faster. There’s no magic solution to suddenly understanding a new language, but patience and persistence will help you get there sooner. By applying methods of learning a second spoken language to learning a programming language, you’ll be able to grasp the material faster.
There’s also no better way to learn than to practice in real time. Take time every day to develop your own code, and test it for errors so you grasp the material quicker. Soon enough, you’ll be able to say you understand a new programming language.
[Sergey Lyubka] put together this epic guide for bare-metal microcontroller programming. While the general concepts should be applicable to most any microcontroller, [Sergey]’s examples specifically relate to the Nucleo-F429ZI development board featuring the ARM-based STM32F429 microcontroller.
In the realm of computer systems, bare-metal programming most often refers to programming the processor without an intervening operating system. This generally applies to programming BIOS, hardware drivers, communication drivers, elements of the operating system, and so forth. Even in the world of embedded programming, where things are generally quite low-level (close to the metal), we’ve grown accustomed to a good amount of hardware abstraction. For example, we often start projects already standing on the shoulders of various libraries, boot loaders, and integrated development tools.
When we forego these abstractions and program directly on the microprocessor or microcontroller, we’re working on the bare metal. [Sergey] aptly defines this as programming the microcontroller “using just a compiler and a datasheet, nothing else.” His guide starts at the very foundation by examining the processor’s memory map and registers including locations for memory mapped I/O pins and other peripherals.
The guide walks us through writing a minimal firmware program, from boot vector to blinking an LED connected to an I/O pin. The demonstration continues with setup and use of necessary tools such as the compiler, linker, and flasher. We move on to increasingly advanced topics like timers, interrupts, UART output, debuggers, and even configuring an embedded web server to expose a complete device dashboard.
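The guide's code is C, but the core trick is treating peripheral registers as integers at fixed addresses and flipping individual bits in them. Here's a rough Python sketch of that flow, using register names and bit positions in the general style of the STM32F4 documentation (clock-enable bit, two mode bits per pin, output data register); note it acts on a plain dictionary rather than real hardware, and the pin number is a made-up example:

```python
# Simulated slice of an STM32-style memory map. On real hardware these
# would be words at fixed addresses (e.g. the GPIOB block on an STM32F4
# sits at a documented base address); a dict stands in here so the bit
# arithmetic can run anywhere.
regs = {"RCC_AHB1ENR": 0, "GPIOB_MODER": 0, "GPIOB_ODR": 0}

LED_PIN = 7  # hypothetical pin; check your board's schematic for the real one

def enable_gpiob_clock():
    regs["RCC_AHB1ENR"] |= 1 << 1                # set the port-B clock-enable bit

def set_pin_as_output(pin):
    regs["GPIOB_MODER"] &= ~(0b11 << (pin * 2))  # clear the pin's 2 mode bits
    regs["GPIOB_MODER"] |= 0b01 << (pin * 2)     # 01 = general-purpose output

def toggle_pin(pin):
    regs["GPIOB_ODR"] ^= 1 << pin                # flip the output data bit

enable_gpiob_clock()        # peripherals are dead until clocked
set_pin_as_output(LED_PIN)
toggle_pin(LED_PIN)         # LED on
toggle_pin(LED_PIN)         # LED off
```

The order matters: forgetting the clock-enable step is the classic bare-metal beginner bug, because writes to an unclocked peripheral's registers silently do nothing.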
While initially more time consuming, working close to the metal provides a good deal of additional insight into, and control over, hardware operations. For even more on the subject, you may like our STM32 Bootcamp series on bare-metal STM32 programming.
C++ has overtaken Java to be the third most popular language in the Tiobe programming language index.
It's the first time C++ has overtaken Java in the Tiobe index, and it's the first time since 2001 that Java hasn't been in the top three, according to Paul Jansen, CEO of Tiobe Software, a Dutch software quality testing firm.
"The rising popularity of C++ goes at the expense of Java. C++ surpassed Java for the first time in the history of the TIOBE index, which means that Java is at position 4 now," Jansen notes. "This is the first time that Java is not part of the top 3 since the beginning of the TIOBE index in 2001."
He also notes Kotlin and Julia are getting closer to joining the top 20 list. Java-compatible Kotlin is backed by Google for Android app development while Julia, debuted by MIT researchers in 2012, is popular among some for data science.
Analyst RedMonk picked Julia as a language to watch in 2018 as a possible Python rival. It's not in RedMonk's latest June 2022 top 20 list, but also isn't far from it.
In a year-on-year comparison in Tiobe's index, the languages now in the top 20 that made significant gains over the period are: Rust (up from 27 to 20), Objective-C (up from 29 to 19), science-specialized MATLAB (20 to 14), and Google's Go language (up from 19 to 12).
Apple promotes Swift over Objective-C for app development on its platforms but, in Tiobe's index, Swift's ranking dropped from 10th last December to 15th today. Swift and Objective-C are neck and neck respectively, according to RedMonk, at 11th and 12th spot. Stack Overflow places Swift in 19th position in its list of most commonly used programming languages, ahead of Objective-C in 27th.
Lists of 'top' programming languages don't tell you everything about individual coding platforms, and can vary in their focus and how they are put together. Tiobe, for example, uses certain programming-related queries on popular search engines to calculate its ratings, but also bases ratings on the number of skilled engineers in the world, courses and third-party vendors.
Programming is possible on nearly any monitor, but most programmers prefer a big, pixel-dense, attractive screen that can render tiny code with clarity and display numerous windows at once. Prolific multi-taskers, many programmers also go all-in on multiple displays and use two or three monitors at once.
This guide will help you find a great monitor that can handle all the above—and at a reasonable price.
For even more monitor recommendations, check out our roundup of the best monitors across all categories.
The Asus ProArt PA348CGV is an excellent monitor for programming—and many other tasks.
This is a 34-inch ultrawide monitor with a resolution of 3440×1440, which provides plenty of display space and pixel density for viewing multiple windows or large amounts of code. It also has a USB-C port with DisplayPort Alternate Mode and 90 watts of Power Delivery. That’s great for easily docking a USB-C-compatible laptop.
Though ideal for programming, the ProArt PA348CGV excels in any task thrown at it. It has accurate color and a wide color gamut, so it’s great for photo, video, and graphics editing. The monitor also has a 120Hz refresh rate and supports AMD FreeSync Premium Pro, which makes it a solid choice for gaming.
Its price seals the deal. Available for $749.99, the ProArt PA348CGV is less expensive than similar competitors. In fact, it overdelivers compared to most alternatives: Many ultrawide monitors offer similar image quality, a high refresh rate, or USB-C, but very few offer all three.
If you want a standard widescreen monitor for programming, or prefer the pixel density of 4K resolution, the Dell U3223QE is a great choice.
The U3223QE is a 32-inch widescreen monitor with 4K resolution. It offers a large, pixel-dense display that’s great for using four windows in a grid arrangement. The monitor’s high pixel density and strong brightness make code easy to read even when individual windows are small.
Its size and resolution are supported by excellent image quality. This is among the few monitors with an IPS Black panel, which roughly doubles the contrast ratio of a standard IPS panel. The result is a richer, more pleasant image. It also has excellent color accuracy, so it’s great for photo, video, and graphics editing.
The U3223QE is also among the best USB-C monitors available. When connected over USB-C it acts as a feature-rich USB-C hub with multiple USB-A ports, ethernet, audio-out, and DisplayPort-out. It’s perfect for programmers who need to dock a laptop over USB-C.
Need a slightly smaller monitor? Dell also offers the U2723QE, which packs similar features into a 27-inch form factor.
The Asus ProArt PA279CV is an affordable way to snag the benefits of high-end monitors with few sacrifices.
This monitor is a 27-inch widescreen with 4K resolution, offering a reasonably sized and pixel-dense space for viewing multiple windows at once. Its pixel density, which works out to 163 pixels per inch, is as high as you’ll find without upgrading to a more extravagant (and much more expensive) option such as a 5K or 8K display. Image quality is excellent, too, with top-notch color accuracy.
This is a USB-C monitor with 65 watts of Power Delivery and four USB-A ports. Its Power Delivery won’t be enough for high-end laptops but remains adequate for more portable machines, and its USB-A port selection is great for the price.
And what, exactly, is the price? The ProArt PA279CV usually retails for $449.99. That’s a sweet deal for the features and quality it offers.
Need a monitor that’s ideal for programming on a tight budget? The AOC CU34G2X has you covered.
The AOC CU34G2X is a 34-inch curved ultrawide monitor with a resolution of 3440×1440. Its size and resolution are the same as our top pick, the Asus ProArt PA348CGV, so it’s just as useful for programming and multi-tasking.
This monitor uses a VA panel that provides an advantage in contrast ratio and black levels. Its color accuracy and color gamut, though not as good as more expensive alternatives, are more than acceptable in day-to-day use. This monitor supports a 144Hz refresh rate and adaptive sync, making it a solid choice for gaming after the workday is done.
Priced at $399.99 (and often available for less), the CU34G2X is more affordable than most alternatives. This does result in a few sacrifices. It’s not especially bright, so it’s best used in a room with some light control. It also lacks the wide color gamut and great color accuracy found in the ProArt PA348CGV. With that said, its overall image quality is solid and won’t distract from programming.
Programmers often want to use a second monitor—not just for viewing code, but also for managing the wide variety of extra programs (like Slack or Monday) that programmers must use to keep organized and connected. The LG DualUp 28MQ780-B is uniquely suited for this task.
The DualUp 28MQ780-B is a 28-inch monitor with an unusual 16:18 aspect ratio that’s a bit taller than it is wide. It can also rotate 90 degrees, if you’d prefer, to become a bit wider than it is tall. Either way, the monitor is close to square and about as tall as a 32-inch monitor. It also ships with a monitor arm, instead of a desktop stand, which is handy for positioning the monitor next to another display.
Programmers will be pleased with the monitor’s 2560×2880 resolution, which is higher than a 1440p monitor but slightly less than a 4K monitor. The monitor has great image quality with high color accuracy and a wide color gamut. It’s a USB-C monitor, too, providing up to 90 watts of Power Delivery for charging a connected laptop.
Programming doesn’t require a specific type of monitor. Most programmers could be productive on a simple 1080p, 24-inch display. However, there are several features that most programmers will find desirable.
A larger monitor is often better for programming than a smaller one. This includes ultrawide monitors. A larger monitor effectively increases the size of everything on-screen, which in turn can make it easier to see. We think a 27-inch widescreen monitor is a comfortable minimum size to aim for, and all the monitors on this list are at least that large.
There are limits to size, though: A 48-inch display can be uncomfortable to use because it will lack pixel density and may require a lot of head and neck movement to see the corners of the screen.
Programmers will also find high resolutions more useful than lower resolutions.
A higher resolution provides more useable display space because it increases the number of pixels visible. If comparing a 1080p monitor to a 4K monitor, for example, the 4K monitor can literally display four times as many pixels.
Those pixels will also be easier to view and use because a higher resolution improves sharpness. Programmers will find a high-resolution monitor can maintain clarity in extremely small fonts. That’s great when viewing large chunks of code.
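Pixel density is easy to compute yourself: it's just the diagonal resolution in pixels divided by the diagonal size in inches. A quick sketch, checking the figures quoted above:

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch: diagonal pixel count / diagonal size in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# A 27-inch 4K panel lands around 163 PPI (the figure quoted for the
# ProArt PA279CV), while 27-inch 1080p manages only about 82.
print(round(ppi(3840, 2160, 27)))   # ~163
print(round(ppi(1920, 1080, 27)))   # ~82
print(round(ppi(3440, 1440, 34)))   # ~110 for the 34-inch ultrawides here
```

Anything above roughly 140 PPI lets small code stay crisp at typical desktop viewing distances, which is why the 4K picks hold up with four windows tiled on screen.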
A wide range of connectivity, including USB-C, can be useful for programmers. That’s especially true for programmers who use a laptop and frequently dock/undock the laptop throughout the workday.
A USB-C connection can carry video over DisplayPort Alternate Mode and charge a connected laptop with Power Delivery. That makes it a one-cable solution for docking the laptop. Just plug it in and you’re good to go. In many cases, the USB-C monitor will even function as a USB-C hub.
Programming doesn’t require a monitor with good, or even modest, image quality. Functionally, most tasks core to programming would work just as well on a 20-year-old LCD as on a modern display.
However, most programmers find themselves working with or viewing various forms of media occasionally, whether it’s image files for UI elements or textures for a game. This is where superior image quality becomes useful. It will help programmers get a better idea of what the result looks like on a typical user’s display.
Work-from-home programmers will prefer great image quality in day-to-day use. Many use the same monitor for both work and entertainment.
PC World’s monitor reviews are based on rigorous testing by the magazine’s staff and freelance testers.
We use a SpyderX Elite color calibration tool to measure the brightness, contrast, color gamut, and accuracy of each monitor. This tool, which can measure quality objectively, lets us directly compare hundreds of monitors.
Our tests also consider whether a monitor supports any special features that give it an advantage. We like to see a USB-C hub that includes ethernet connectivity and at least 90 watts of Power Delivery. An ergonomic stand, multiple video inputs, and a useful on-screen menu are desirable, as well.
Decentralized finance (DeFi) is growing fast. Total value locked, a measure of money managed by DeFi protocols, has grown from $10 billion to a little more than $40 billion over the last two years after peaking at $180 billion.
The elephant in the room? More than $10 billion was lost to hacks and exploits in 2021 alone. Feeding that elephant: Today’s smart contract programming languages fail to provide adequate features to create and manage assets — also known as “tokens.” For DeFi to become mainstream, programming languages must provide asset-oriented features to make DeFi smart contract development more secure and intuitive.
Solutions that could help reduce DeFi’s perennial hacks include auditing code. To an extent, audits work. Of the 10 largest DeFi hacks in history (give or take), nine of the projects weren’t audited. But throwing more resources at the problem is like putting more engines in a car with square wheels: it can go a bit faster, but there is a fundamental problem at play.
The problem: Programming languages used for DeFi today, such as Solidity, have no concept of what an asset is. Assets such as tokens and nonfungible tokens (NFTs) exist only as variables (numbers that can change) in a smart contract, as with Ethereum's ERC-20. The protections and validations that define how those variables should behave, e.g., that a token shouldn't be spent twice, that it shouldn't be drained by an unauthorized user, and that transfers should always balance and net to zero, all need to be implemented by the developer from scratch, for every single smart contract.
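To see what "tokens are just variables" means in practice, here is a minimal, hypothetical sketch in Python (not real Solidity or ERC-20 code): balances are plain integers in a mapping, and every safety property exists only because the developer remembered to write it.

```python
# Hypothetical sketch of an ERC-20-style ledger. Token "ownership" is just a
# mapping of addresses to integers; nothing about the numbers is intrinsically
# an asset, so every guardrail must be hand-written in every contract.

class ToyToken:
    def __init__(self, initial_supply: int, owner: str):
        # Balances are plain numbers, not assets the runtime understands.
        self.balances = {owner: initial_supply}

    def transfer(self, sender: str, recipient: str, amount: int) -> None:
        # Each check below exists only because the developer wrote it.
        if amount <= 0:
            raise ValueError("amount must be positive")
        if self.balances.get(sender, 0) < amount:
            # Forget this check and the token can be drained or double-spent.
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount


token = ToyToken(1000, "alice")
token.transfer("alice", "bob", 400)
# Transfers must balance and net to zero, but only because the code above
# happens to subtract and add the same amount.
```

Omit any one of those validations and the contract still compiles and deploys; the language has no opinion about whether the numbers behave like money.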
As smart contracts get more complex, so do the protections and validations they require. People are human. Mistakes happen. Bugs happen. Money gets lost.
A case in point: Compound, one of the most blue-chip of DeFi protocols, was exploited to the tune of $80 million in September 2021. Why? The smart contract contained a “>” instead of a “>=.”
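The following is a hypothetical illustration of that class of bug, not the actual Compound code: a single comparison operator decides whether a boundary case passes a validation check.

```python
# Illustrative only: the same *class* of bug as the Compound incident, where a
# ">" was written where ">=" was intended. The two functions disagree on
# exactly one boundary case, which is all an exploit needs.

def check_buggy(balance: int, amount: int) -> bool:
    # Bug: the case where balance equals amount is handled incorrectly.
    return balance > amount

def check_fixed(balance: int, amount: int) -> bool:
    return balance >= amount

# Away from the boundary, both behave identically, which is why tests and
# audits can miss it:
assert check_buggy(101, 100) == check_fixed(101, 100)
assert check_buggy(99, 100) == check_fixed(99, 100)

# On the boundary, they diverge:
assert check_buggy(100, 100) is False
assert check_fixed(100, 100) is True
```

One character in one condition, indistinguishable from correct code everywhere except the boundary, is enough to misprice an entire protocol.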
For smart contracts to interact with one another, such as a user swapping a token with a different one, messages are sent to each of the smart contracts to update their list of internal variables.
The result is a complex balancing act. Ensuring that all interactions with the smart contract are handled correctly falls entirely on the DeFi developer. Since there are no innate guardrails built into Solidity and the Ethereum Virtual Machine (EVM), DeFi developers must design and implement all the required protections and validations themselves.
So DeFi developers spend nearly all their time making sure their code is secure, then double- and triple-checking it, to the extent that some developers report spending up to 90% of their time on validations and testing and only 10% building features and functionality.
With the majority of developer time spent battling insecure code, compounded by a shortage of developers, how has DeFi grown so quickly? Apparently, there is demand for self-sovereign, permissionless and automated forms of programmable money, despite the challenges and risks of providing it today. Now, imagine how much innovation could be unleashed if DeFi developers could focus their productivity on features and not failures. The kind of innovation that might allow a fledgling $46 billion industry to disrupt an industry as large as, well, the $468 trillion of global finance.
The key to making DeFi both innovative and safe stems from the same source: give developers an easy way to create and interact with assets, and make assets and their intuitive behavior a native feature of the platform. Any asset created should always behave predictably and in line with common-sense financial principles.
In the asset-oriented programming paradigm, creating an asset is as easy as calling a native function. The platform knows what an asset is: .initial_supply_fungible(1000) creates a fungible token with a fixed supply of 1000 (beyond supply, many more token configuration options are available as well) while functions such as .take and .put take tokens from somewhere and put them elsewhere.
Instead of developers writing complex logic instructing smart contracts to update lists of variables with all the error-checking that entails, in asset-oriented programming, operations that anyone would intuitively expect as fundamental to DeFi are native functions of the language. Tokens can’t be lost or drained because asset-oriented programming guarantees they can’t.
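The contrast with the earlier ledger sketch can be made concrete. This is an illustrative Python sketch of the *idea* of asset-oriented programming, not the real API of Radix's Scrypto or any other platform: tokens live in bucket objects, and the only operations available, take and put, conserve the total by construction.

```python
# Hypothetical sketch of asset-oriented semantics (not a real platform API).
# A Bucket holds tokens; take() and put() are the only ways to move them,
# so tokens can be split and merged but never duplicated or silently lost.

class Bucket:
    def __init__(self, amount: int):
        self._amount = amount

    def take(self, amount: int) -> "Bucket":
        # Taking splits the asset; the total across all buckets is unchanged.
        if amount > self._amount:
            raise ValueError("cannot take more than the bucket holds")
        self._amount -= amount
        return Bucket(amount)

    def put(self, other: "Bucket") -> None:
        # Putting merges the asset; the source bucket is emptied, not copied.
        self._amount += other._amount
        other._amount = 0

    @property
    def amount(self) -> int:
        return self._amount


supply = Bucket(1000)        # analogous to creating a fixed supply of 1000
payment = supply.take(250)   # move 250 tokens into a new bucket
supply.put(payment)          # merge them back; the total is conserved
```

The developer never writes the balance checks from the ERC-20-style sketch, because there is no code path in which tokens appear from nowhere or vanish; conservation is a property of the primitives, not of the developer's diligence.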
This is how you get both innovation and safety in DeFi. And this is how you change the mainstream public's perception of DeFi from the wild west to the place you put your savings because, otherwise, you're missing out.
Ben Far is head of partnerships at RDX Works, the core developer of the Radix protocol. Prior to RDX Works, he held managerial positions at PwC and Deloitte, where he served clients on matters relating to the governance, audit, risk management and regulation of financial technology. He holds a bachelor of arts in geography and economics and a master’s degree in mapping software and analytics from the University of Leeds.