“The overwhelming majority of successful innovations exploit change.” – Peter Drucker
What is a run-time?
Deno is fairly new, so it will see a ton of changes, but the run-time itself and how it works are unlikely to change much at all. Learning the fundamentals now gives you a ton of skills to lean on later when troubleshooting or reviewing changes.
More importantly, understanding how Deno works will extend your understanding of how Node works – so don’t dismiss this article just yet. It’s jam-packed with knowledge bombs about Node.
It would be hard to imagine that you haven’t heard the term “V8 engine”, but it’s easy to believe you might not really know anything about what it is. Today, that changes.
The Deno run-time, also called a runtime environment, is a system of parts. Imagine a car with an engine (the V8 engine), a transmission (rusty_v8), and a gas tank (the Tokio project). Below we will break down all the parts into easily consumable questions and answers. I hope you enjoy!
The V8 Engine
V8 was created by Google in 2008 for use with the open source Chromium project (it wasn’t created as open source for altruistic reasons – more to gain a better share of the browser and search engine market).
How does parsing work?
The engine first parses your source code into an Abstract Syntax Tree (AST). From there, an interpreter, profiler, and compiler spit out optimized machine code or bytecode that runs on your device. Within the interpreter are a call stack and a memory heap (there is a gif explainer on the call stack/memory heap below).
What is an interpreter?
Basically, an interpreter takes a set of instructions and runs them for the engine step by step, in order, to produce a desired outcome. In V8, these instructions are interpreted into bytecode.
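To make that concrete, here is a toy interpreter. This is not V8’s actual bytecode format – just a hypothetical sketch of the idea: a list of instructions executed step by step, in order, against a stack.

```typescript
// A toy "bytecode" interpreter: each instruction runs step by step,
// in order, against a simple value stack.
// (Illustrative only – V8's real bytecode is far richer.)
type Instruction =
  | { op: "push"; value: number }
  | { op: "add" }
  | { op: "mul" };

function interpret(program: Instruction[]): number {
  const stack: number[] = [];
  for (const instr of program) {
    switch (instr.op) {
      case "push":
        stack.push(instr.value);
        break;
      case "add": {
        const b = stack.pop()!, a = stack.pop()!;
        stack.push(a + b);
        break;
      }
      case "mul": {
        const b = stack.pop()!, a = stack.pop()!;
        stack.push(a * b);
        break;
      }
    }
  }
  return stack.pop()!;
}

// (2 + 3) * 4
const result = interpret([
  { op: "push", value: 2 },
  { op: "push", value: 3 },
  { op: "add" },
  { op: "push", value: 4 },
  { op: "mul" },
]);
console.log(result); // 20
```

Notice that the interpreter never looks ahead – it simply executes whatever instruction comes next, which is why it can start running code immediately.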
What is a compiler?
A compiler doesn’t read and execute on the fly, line by line, like an interpreter. Instead, it makes a full pass through the code and then writes a new program in a new language. A compiler that front-end developers may be familiar with is Babel.
It’s important to write predictable code – not just for people, but for the engine’s compiler – so that it can optimize how your code runs.
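One concrete example of “predictable” code is keeping object shapes consistent. Objects created with the same properties, in the same order, can share an internal hidden class, which lets the engine specialize property access. This sketch just illustrates the pattern – the optimization itself happens inside the engine and isn’t directly visible from your code:

```typescript
// Objects with the same properties in the same order share a
// hidden class ("shape"), which lets the engine specialize access.
interface Point { x: number; y: number; }

// Predictable: every Point is created with the same shape.
function makePoint(x: number, y: number): Point {
  return { x, y };
}

function magnitudeSquared(p: Point): number {
  // Always sees the same shape – easy for the engine to optimize.
  return p.x * p.x + p.y * p.y;
}

// In plain JS, adding properties after creation or mixing
// {x, y} objects with {y, x} objects would force the engine to
// handle many shapes, making this call harder to optimize.
const pts = [makePoint(3, 4), makePoint(6, 8)];
const total = pts.reduce((sum, p) => sum + magnitudeSquared(p), 0);
console.log(total); // 125
```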
Compilers vs Interpreters?
In some respects, all languages on the web need some level of interpretation or compilation – but why would you use one over the other?
Compilers may take a little more time to run initially, but they produce machine code that simplifies execution and saves resources. Compilers are used for optimization.
The key takeaway is that we can get the best of both worlds by combining the two into something called a “JIT” compiler.
What is a JIT Compiler?
JIT compilers are also known as “Just In Time” compilers, and V8’s is called TurboFan. You’ll start to see a pattern in the car metaphors.
Remember how I said the code goes from the AST to the interpreter, profiler, and compiler? In V8, the code initially goes to the interpreter, which is called Ignition – see what I mean? What it spits out is bytecode.
What is a Profiler?
The profiler, also known as a monitor, watches the code the interpreter runs and how it behaves. It monitors for what can be optimized, selects hot code for the compiler, and continuously improves the execution speed of the engine’s output. This ensures the fastest code possible.
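The classic case the profiler looks for is a “hot” function: one called over and over with the same kinds of arguments. You can’t observe the JIT directly from your code, but the pattern looks like this (if you’re curious, V8 has diagnostic flags such as `--trace-opt` that report when functions get optimized):

```typescript
// A function called many times in a tight loop becomes "hot".
// The profiler notices this and hands it to the optimizing
// compiler (TurboFan in V8) to produce fast machine code.
function square(n: number): number {
  return n * n;
}

let sum = 0;
for (let i = 0; i < 100_000; i++) {
  sum += square(i % 10); // 100,000 calls -> a prime optimization target
}
console.log(sum); // 2850000
```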
Side note: in V8 there are actually two JIT compilers.
Why not just use machine code from the outset?
At the end of the day, it comes down to language adoption – but on a more technical note, it’s due to WebAssembly (Wasm).
“Wasm is a binary instruction format for a stack-based virtual machine. Wasm is designed as a portable compilation target for programming languages, enabling deployment on the web for client and server applications” (webassembly.org), and it is supported by all the big browsers.
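You can poke at this binary format directly from JavaScript. The smallest well-formed Wasm module is just eight bytes – the `\0asm` magic number plus a version – and `WebAssembly.validate` (a standard API available in browsers, Node, and Deno) will confirm it’s a valid module:

```typescript
// The smallest possible Wasm binary: the "\0asm" magic number
// followed by binary format version 1. WebAssembly.validate
// checks well-formedness without compiling or running anything.
const emptyModule = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // "\0asm" magic number
  0x01, 0x00, 0x00, 0x00, // binary format version 1
]);

console.log(WebAssembly.validate(emptyModule)); // true
```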
To understand Deno under the hood, you first need to understand Node under the hood.
How does Node work under the hood?
How does Node use the call stack? Well, if you haven’t already, I highly recommend looking into my article on event loops and callbacks, where I explain how Node handles calls. Everything with Deno is similar – but different.
In the above example you are seeing libuv in action: it is how the Node API handles asynchronous code, using bindings to communicate from libuv back to the V8 engine. If you are confused, don’t worry – I’ll go into further detail.
libuv executes commands based on the call stack, and after they are processed in their respective queues it kicks the results back to the Node.js bindings and on to the V8 engine.
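You can see the JavaScript-visible effect of this round trip with a timer. The callback is handed off (to libuv, in Node) and only re-enters the engine via the queue after the current call stack has emptied – so it never runs in the middle of your synchronous code:

```typescript
// The timer callback is queued outside the call stack and only
// runs after all synchronous code has finished.
const order: string[] = [];

order.push("script start");

setTimeout(() => {
  // Runs later, once the call stack is empty and the event loop
  // picks the callback up from its queue.
  order.push("timeout callback");
}, 0);

order.push("script end");
// At this point "timeout callback" is still waiting in the queue.
```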
How is Deno different?
What Deno is doing under the hood is essentially the same: it creates and starts a process. When you open an app, you are creating and running a process. That process basically gives you a sandbox, with memory and boundaries, that you can run a program within.
From a high level, these two run-times do essentially the same things. How they do them differs slightly: who does the work, how secure they are, and how fast they run – differences that pave the way for a promising future for Deno.
What is rusty_v8?
Head over to the Deno repo and you can see some of the code that makes this all happen.
You can see, around line 32, Deno’s bindings, which are available on start. These allow it to access window.Deno.core and send/receive messages really quickly – for example, Deno.core.recv pulls messages from Rust. This makes Deno fully featured: you can access ‘all-the-things’. In the Rust language, the things we can do are called ops, or operations – just like syscalls, they are operations we need the computer to perform to run the task.
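Deno.core is an internal API whose exact surface has changed across versions, so rather than relying on it, here is a hypothetical sketch of the idea: JavaScript asks the privileged side (Rust, in Deno’s case) to perform an operation by name, and gets a response back. Every name below is illustrative, not Deno’s real implementation:

```typescript
// Hypothetical sketch of op dispatch: script code requests a
// privileged, syscall-like operation by name and receives a reply.
// (In real Deno, the handlers live in Rust, not in a JS Map.)
type OpHandler = (payload: string) => string;

const ops = new Map<string, OpHandler>([
  // stand-ins for operations the privileged side would implement
  ["op_read_file", (path) => `<contents of ${path}>`],
  ["op_now", () => String(Date.now())],
]);

function dispatch(opName: string, payload: string): string {
  const handler = ops.get(opName);
  if (!handler) throw new Error(`unknown op: ${opName}`);
  return handler(payload);
}

console.log(dispatch("op_read_file", "/tmp/hello.txt"));
```

The important design point is the boundary: script code never touches the file system or network directly – it only sends requests across the divide, which is what makes sandboxing and permission checks possible.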
Let’s pretend we make a request. To run multiple operations at the same time, we need something called the event loop, which allows us to run events (much like Node). Deno uses something called the Tokio project, which provides a thread pool and workers to run commands for us, much like libuv.
Why did Deno use Tokio instead of libuv?
“Tokio is a Rust module, which works with the future abstraction. libuv is C and would necessitate building a bridge to run futures.” – Ryan Dahl, https://github.com/denoland/deno/issues/2340
Tokio is an event-driven, non-blocking I/O platform for writing asynchronous applications with the Rust programming language. At a high level, it provides a few major components:
- A multithreaded, work-stealing based task scheduler.
- A reactor backed by the operating system’s event queue (epoll, kqueue, IOCP, etc…).
- Asynchronous TCP and UDP sockets.
These components provide the runtime components necessary for building an asynchronous application.
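As a rough analogy for what a scheduler like Tokio’s does, here is a toy task queue: tasks get scheduled, then the runtime drains the queue and executes them. Keep in mind this is single-threaded TypeScript – Tokio actually distributes tasks across multiple work-stealing threads – so this only sketches the queue-and-drain shape, not the concurrency:

```typescript
// A toy single-threaded task queue, as an analogy for a runtime
// scheduler: work is queued up, then drained and executed.
// (Tokio runs tasks on a multithreaded, work-stealing scheduler;
// JavaScript here executes everything on one thread.)
type Task = () => void;

class TaskQueue {
  private tasks: Task[] = [];

  schedule(task: Task): void {
    this.tasks.push(task);
  }

  // Drain the queue, running each task in order; returns the count.
  run(): number {
    let executed = 0;
    while (this.tasks.length > 0) {
      const task = this.tasks.shift()!;
      task();
      executed++;
    }
    return executed;
  }
}

const queue = new TaskQueue();
const results: number[] = [];
queue.schedule(() => results.push(1));
queue.schedule(() => results.push(2));
const count = queue.run();
console.log(count, results); // 2 [ 1, 2 ]
```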
Bringing it all together
When we call a Deno API (Deno.[command], for example), we are going to be using the Rust backend. Once the task is received at the thread pool from the Tokio project, the thread pool queues those jobs up, processes them in Rust, and then sends them back through rusty_v8 to be processed by the V8 engine.
In Node, by contrast, the Node.js bindings (aka the Node API) make a call to the event loop, libuv – which plays exactly the role the Tokio project does for Deno. As described earlier, libuv executes commands based on the call stack, and after they are processed it kicks the results back to the Node.js bindings and on to the V8 engine.
Image source: v8.dev
In my next article, we will talk about Deno’s main benefits, specifically security.
If you found this article helpful, give me a shout on twitter – I’d love to hear from you. @codingwithdrewk. As always, if you find any errors, just highlight them and mash that “R” button on the right side of the screen, and I’ll get those fixed right up!