Tutorials on Node.js

Learn about Node.js from fellow newline community members!

  • React
  • Angular
  • Vue
  • Svelte
  • NextJS
  • Redux
  • Apollo
  • Storybook
  • D3
  • Testing Library
  • JavaScript
  • TypeScript
  • Node.js
  • Deno
  • Rust
  • Python
  • GraphQL

How to Fix the Error: error:0308010C:digital envelope routines::unsupported

If you are running Webpack, or a CLI tool built on top of Webpack (e.g., react-scripts for Create React App applications or vue-cli-service for Vue applications), with version 17 of Node.js, then you may have come across the following error: Error: error:0308010C:digital envelope routines::unsupported

With Node.js v17+ supporting OpenSSL 3.0, algorithms like MD4 have been relegated to OpenSSL 3.0's legacy provider. A provider is a collection of cryptographic algorithm implementations. OpenSSL 3.0 comes with five standard providers: default, legacy, FIPS, base and null. The legacy provider consists of algorithms that are considered to be rarely used in today's world or unsafe security-wise. This provider exists for backwards compatibility purposes (for software that still relies on these algorithms) and is not loaded by default.

Webpack creates hashes using the crypto.createHash() method of the Node.js crypto module. This method can only create hashes with algorithms that are available and supported by the version of OpenSSL corresponding to the currently installed Node.js version. Since Webpack tells crypto.createHash() to use the MD4 algorithm ( https://github.com/webpack/webpack/blob/main/lib/util/createHash.js ), and this algorithm is not readily available in Node.js v17+ because OpenSSL 3.0 does not load the legacy provider by default, Webpack errors out and Node.js logs the error message Error: error:0308010C:digital envelope routines::unsupported.

To fix this error, you can do one of five things:

1. Downgrade to Node.js v16 via nvm. If you are running Node.js via nvm, then you can install Node.js v16. Once the installation finishes, nvm automatically switches the current version of Node.js to the newly installed version. Note: Specifying 16 installs the latest LTS version with a major version of 16, which happens to be, as of the publication of this article, 16.16.0. Note: The Node.js version can be any version less than 17, but it's highly recommended to stick with Node.js versions that are under active or maintenance LTS status. Run node -v && npm -v to verify the versions of Node.js and npm running on your machine. Then, delete the node_modules folder and re-install the project's dependencies.

2. Downgrade to Node.js v16 via Volta. Similarly, if you are running Node.js via Volta, then you can also install Node.js v16 the same way. Note: For convenience, you can save this exact version of Node.js and npm to the project via the volta pin node@16 command. Anytime you enter the project directory and run Node.js, Volta automatically switches the current version of Node.js to the pinned version. Run node -v && npm -v to verify the versions of Node.js and npm running on your machine. Then, delete the node_modules folder and re-install the project's dependencies.

3. Pass the --openssl-legacy-provider flag. Introduced in Node.js v17 alongside support for OpenSSL 3.0, the --openssl-legacy-provider flag tells Node.js to revert to OpenSSL 3.0's legacy provider. This allows you to run tools like Webpack that still create hashes with legacy cryptographic algorithms like MD4. If you have multiple CLI tools that depend on legacy cryptographic algorithms, then you can set the NODE_OPTIONS environment variable to --openssl-legacy-provider instead of passing the --openssl-legacy-provider flag to each of these tools.
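Here are some examples of how to pass the flag (a sketch; the entry file and the react-scripts invocation are placeholders I've chosen, not the article's exact commands):

```bash
# Pass the flag directly to the node binary (entry file is a placeholder):
node --openssl-legacy-provider index.js

# Or prepend it to a single tool invocation via NODE_OPTIONS (Unix shells):
NODE_OPTIONS=--openssl-legacy-provider npx react-scripts start
```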
For macOS and Unix, run the following command before running anything else: export NODE_OPTIONS=--openssl-legacy-provider . For Windows, run the following command before running anything else: set NODE_OPTIONS=--openssl-legacy-provider . Alternatively, you could set the environment variable directly within an npm script of a package.json file (e.g., prefixing the script's command with NODE_OPTIONS=--openssl-legacy-provider ). With npm-run-all, all of the executed npm scripts receive the NODE_OPTIONS environment variable. Note: For cross-platform compatibility, set the environment variable via a CLI tool like cross-env.

4. Patch the modules that create MD4 hashes. For an older Create React App project that runs react-scripts v4.0.3 (the version before v5.0.0), there are three files across two dependencies that use the MD4 algorithm for creating hashes. Upon patching these files, the Create React App application runs successfully with Node.js v17+. However, this approach is highly discouraged. For a Webpack project, you can apply a patch that redirects requests for creating hashes with the MD4 algorithm to creating hashes with the MD5 algorithm instead ( https://github.com/webpack/webpack/blob/main/lib/util/createHash.js ); a sketch of this patch appears at the end of this article. Note: Overriding the createHash() method of the crypto module this way was originally suggested by Alexander Akait, a core contributor of Webpack.

5. Upgrade react-scripts to v5. For Create React App projects, check the installed version of react-scripts. If the version is less than v5.0.0, then upgrade react-scripts to v5.0.0 or higher. In v5.0.0 of Create React App, the version of the cached Webpack modules and chunks gets generated using a stringified object of environment variables and the MD5 algorithm ( https://github.com/facebook/create-react-app/blob/main/packages/react-scripts/config/webpack/persistentCache/createEnvironmentHash.js ).

Want to learn about Vue 3, the Composition API and building real-world, production-ready applications with Vue 3? Check out our book Fullstack Vue 3.
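As referenced in option 4 above, here is a minimal sketch of the createHash() override (my reconstruction of the commonly shared workaround, not the article's exact patch); it must run before Webpack computes any hashes, e.g., at the top of webpack.config.js:

```js
// Redirect requests for MD4 hashes to MD5 before Webpack creates any hashes.
const crypto = require("crypto");
const createHashOriginal = crypto.createHash;

crypto.createHash = (algorithm, options) =>
  createHashOriginal(algorithm === "md4" ? "md5" : algorithm, options);
```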


Building Your First ASP.NET Core RESTful API for Node.js Developers - Introduction (Part 1)

Over the past decade, many developers started their backend development journey with Node.js. What makes Node.js compelling to developers is the benefit of creating client-side and server-side applications with a single programming language: JavaScript. This convenience, along with the growing interest in frameworks written with modern programming languages like Golang and Rust, means more developers are less likely to branch out to older, more established technologies like ASP.NET.

Developed and maintained by Microsoft, ASP.NET (Active Server Pages Network Enabled Technologies) is a framework for creating dynamic web applications and services on the .NET platform. You can write ASP.NET applications with any .NET programming language: C#, F# or Visual Basic. ASP.NET is widely used across many industries, most notably by large corporations and government agencies. Despite Microsoft's efforts to adapt ASP.NET to the rapidly evolving web development landscape, such as the release of ASP.NET MVC in 2009 in response to the popularity of MVC frameworks like Django and Ruby on Rails, ASP.NET continued to suffer from several limitations.

To migrate away from ASP.NET's monolithic design, Microsoft re-implemented ASP.NET as a modular, cross-platform compatible, open-source framework named ASP.NET Core. Released in 2016, ASP.NET Core comes with built-in support for dependency injection and more. With these features, you can build and run lightweight web applications on Windows, Linux and macOS via the .NET Core runtime. In fact, developers can host their web applications not just on IIS, but also on Nginx, Docker, Apache and much more. This all makes ASP.NET Core applications suitable for containerization and optimized for cloud-based environments. For any missing functionality, you can fetch packages from NuGet. As the package manager for .NET, NuGet is equivalent to npm for Node.js.

At its core, an ASP.NET Core application is a self-contained console application that self-hosts a web server (by default, the cross-platform web server Kestrel), which processes incoming requests and passes them directly to the application. Once it finishes handling a request, the application passes the response to the web server, which sends the response directly to the client (or reverse proxy). Keeping the web server independent of the application this way makes testing and debugging much simpler, especially when compared to previous versions of ASP.NET where IIS directly executes the application's methods.

So why might Node.js developers consider learning ASP.NET Core? In the latest round of TechEmpower benchmarks, ASP.NET Core significantly outperforms Node.js, sending back almost nine times more plaintext responses per second.

Below, I'm going to show you how to build your first ASP.NET Core RESTful API with C#, a strongly-typed, object-oriented language. Throughout this tutorial, I will relate concepts and patterns to those that you may have already encountered in an Express.js RESTful API.

To get started, verify that you have the latest LTS version of the .NET Core SDK, v6, installed on your machine. The .NET Core SDK (Software Development Kit) consists of everything you need to create and run .NET applications. If your machine does not have the .NET Core SDK installed, then download the latest LTS version of the .NET Core SDK for your specific platform and follow the installation directions. Once installed, create a new directory named weather-api.
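The scaffolding steps described in the next few paragraphs boil down to a short command sequence; here it is as a sketch (directory and project names mirror the article, and these are the standard dotnet CLI commands):

```bash
mkdir weather-api && cd weather-api

# Create a solution file (weather-api.sln, named after the directory)
dotnet new sln

# Scaffold an ASP.NET Core Web API project into an API directory
dotnet new webapi -n API

# Register the project with the solution, then list projects to verify
dotnet sln add API/API.csproj
dotnet sln list

# Restore the project's NuGet packages (similar to npm install)
dotnet restore
```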
Then, within this directory, create a new solution file ( dotnet new sln ). A solution file lists and tracks all of the projects that belong to a .NET Core application. For example, the application may include an ASP.NET Core Web API project, several class libraries (for directly interfacing with databases via the Entity Framework) and an ASP.NET Core with React.js project. With a solution file, the dotnet CLI knows which projects to restore NuGet packages for ( dotnet restore ), build ( dotnet build ) and test ( dotnet test ) in your application. In this case, you will find a weather-api.sln file within the root of the project directory.

Next, create a new ASP.NET Core Web API project. The dotnet new command scaffolds a new project or file based on a specified template, such as sln for a solution file and webapi for an ASP.NET Core Web API. The -n option tells the dotnet new command the name of the outputted project/file. In this case, you will find the ASP.NET Core Web API project located within an API directory.

Add this project to the solution file. You can verify that the project has been added by running dotnet sln list , which lists all of the projects added to the solution file. If you open the solution file, then you will find the "API" project listed with a project type GUID ( FAE04EC0-301F-11D3-BF4B-00C04F79EFBC for C#), a reference to the project's .csproj file and a unique project GUID.

Finally, restore the project's NuGet packages. For Node.js developers, this is similar to running npm install / yarn install on a freshly cloned Git repository to reinstall dependencies. If you are building this project on macOS, then you can find the NuGet packages in the ~/.nuget/packages directory. These packages relate to the package referenced in API/API.csproj : Swashbuckle.AspNetCore , which sets up Swagger for ASP.NET Core APIs. You can check the project's obj/project.nuget.cache file for absolute paths to the project's NuGet packages.

Now let's take a look at the three C# files in the API directory. Much like the index.js file of a simple Express.js RESTful API, the Program.cs file bootstraps and starts up a RESTful API, but for ASP.NET Core. It follows a minimal hosting model that consolidates the Startup.cs and Program.cs files from previous ASP.NET versions into a single Program.cs file. Plus, the Program.cs file now makes use of top-level statements and implicit using directives to eliminate extra boilerplate code like the class with a Main method and using directives respectively. As you can see in the Program.cs file, setting up and running a RESTful API with ASP.NET Core requires significantly less code than previous ASP.NET versions.

Program.cs begins with instantiating a new WebApplicationBuilder , a builder for web applications and services. WebApplicationBuilder follows the builder pattern, which breaks down the construction of a complex object into multiple, distinct steps. This means that we delay the creation of the builder object ( var app = builder.Build() ) until we finish configuring it. Upon instantiation, the builder object comes with preconfigured defaults for several properties. Alongside these preconfigured defaults, we explicitly register additional services to the built-in DI (dependency injection) container with WebApplicationBuilder.Services .
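Here is roughly what that Program.cs looks like (a sketch of the standard .NET 6 webapi template; your generated file may differ slightly by template version):

```csharp
var builder = WebApplication.CreateBuilder(args);

// Register services with the built-in DI container.
builder.Services.AddControllers();
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

var app = builder.Build();

// Only expose the Swagger middleware in the development environment.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();
app.UseAuthorization();

// Create an endpoint for each controller action.
app.MapControllers();

app.Run();
```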
This DI container simplifies dependency injection in ASP.NET Core (i.e., it automatically resolves dependencies and manages their lifetimes) and is responsible for making all registered services available to the entire application. As in the sketch above, methods like AddControllers() , AddEndpointsApiExplorer() and AddSwaggerGen() get called on builder.Services . After registering these services, call the builder object's Build() method to build the WebApplication (host) with these configurations. By default, the WebApplication uses Kestrel as the web server.

Then, we check if the application is running within a development environment, and if so, add Swagger middleware ( UseSwagger() and UseSwaggerUI() ) to the application's middleware pipeline. Notice how these Use{Feature} extension methods that add middleware are prefixed with Use , which is similar to how Express.js calls app.use() to mount middleware functions. Calling the UseSwaggerUI() method automatically enables the static file middleware. Express.js also provides a built-in middleware function for serving static assets ( app.use(express.static("<path_to_static_files>")) ). The remaining middleware gets applied to all requests regardless of environment.

After all of this middleware gets added to the middleware pipeline, we call the MapControllers() method to automatically create an endpoint for each of the application's controller actions and add them to the IEndpointRouteBuilder . This method saves us the trouble of having to explicitly define the routes ourselves. Lastly, we call the Run() method to run the application.

To start up the ASP.NET Core RESTful API, run the dotnet run command, which runs the project in the current directory, within the API directory. Note: If our application consisted of multiple projects, then you could specify which project you want to run by passing a --project option to dotnet run (e.g., dotnet run --project API to run just the API project) without having to change the current directory.

When you run this command, you may come across an error message about the development certificate. If you do, then follow the directions in the error message. Run the dotnet dev-certs https --clean command to remove all existing ASP.NET Core development certificates, and run dotnet dev-certs https to create a new untrusted developer certificate. To trust this certificate, run the command dotnet dev-certs https --trust . Then, re-run the dotnet run command and the error message should no longer pop up. Alternatively, you can remove https://localhost:7101; from applicationUrl in the API/Properties/launchSettings.json file, which stores profiles that tell ASP.NET Core how to run a specific project.

Within a browser, you can visit the Swagger documentation at http://localhost:5077/swagger . Here, you will find that the RESTful API comes with only a single endpoint: GET /WeatherForecast . If you expand the endpoint's accordion item, then a summary of the endpoint will appear, providing an example response (status code, value, etc.) for the endpoint. If you test this endpoint by visiting http://localhost:5077/WeatherForecast in the browser, or by sending a GET request to http://localhost:5077/WeatherForecast via a REST client like Postman or a CLI utility like cURL, then you will get a response that contains four weather forecasts. To see how the RESTful API handles requests to the GET /WeatherForecast endpoint, open the Controllers/WeatherForecastController.cs file.
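The scaffolded controller and model look roughly like this (a trimmed sketch based on the standard webapi template: the summaries list is shortened, and the forecast count is set to four to match the article; the real generated files may differ):

```csharp
using Microsoft.AspNetCore.Mvc;

namespace API.Controllers;

[ApiController]
[Route("[controller]")]
public class WeatherForecastController : ControllerBase
{
    private static readonly string[] Summaries = new[]
    {
        "Freezing", "Bracing", "Chilly", "Cool", "Mild"
    };

    // Handles GET /WeatherForecast
    [HttpGet]
    public IEnumerable<WeatherForecast> Get()
    {
        // Build four forecasts from the WeatherForecast model
        return Enumerable.Range(1, 4).Select(index => new WeatherForecast
        {
            Date = DateTime.Now.AddDays(index),
            TemperatureC = Random.Shared.Next(-20, 55),
            Summary = Summaries[Random.Shared.Next(Summaries.Length)]
        })
        .ToArray();
    }
}

// In the real project this lives in API/WeatherForecast.cs
public class WeatherForecast
{
    public DateTime Date { get; set; }

    public int TemperatureC { get; set; }

    // Derived property: convert Celsius to Fahrenheit
    public int TemperatureF => 32 + (int)(TemperatureC / 0.5556);

    public string? Summary { get; set; }
}
```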
If you have developed an Express.js RESTful API, then you should be familiar with the concept of controllers. After all, route callback functions act as controllers. To understand how this file works, let's first take a look at the [ApiController] attribute. This attribute tells ASP.NET Core that the controller class will opt in to using opinionated, commonly-used API functionality like multipart/form-data request inference and automatic HTTP 400 responses.

A route attribute ( [Route("[controller]")] ) is placed on the controller and coerces all controller actions to use attribute routing. The [controller] token in the route attribute expands to the controller's name, so the controller's base URL path is /{controller_name} , or in this case, /WeatherForecast . This means the URL path /WeatherForecast can match the WeatherForecastController.Get() action method. Since this action method is marked with the HttpGet attribute, only GET requests to /WeatherForecast will run this action method.

A controller class in an ASP.NET Core RESTful API should derive from the ControllerBase class, which provides the properties and methods needed for processing any HTTP request. This controller contains only one action method, Get() , which returns four weather forecasts. Each weather forecast is created with the WeatherForecast model that's defined in the API/WeatherForecast.cs file (included at the bottom of the sketch above). This model represents the shape of a weather forecast's data. In the Get() action method, we pass several values to the model, and the model automatically populates each property accordingly.

Proceed to the second part of this tutorial series to see how to add your own endpoints to this RESTful API.



Building an API using Firebase Functions for cheap

When I am working on personal projects, I often find the need to set up an API that serves up data to my app or webpages, and I get frustrated when I end up spending too much time on hosting and environment issues. These days, what I end up doing is hosting the API using Cloud Functions for Firebase. It hits all my requirements.

The official name is Cloud Functions for Firebase. In this article, I am going to call it Firebase Functions, mostly to distinguish it from Google's other serverless functions-as-a-service: Cloud Functions. You can read more about the differences here. While I'm not going to write a mobile app in this article, I still like to use Firebase Functions. If all this isn't confusing enough, Google is rolling out a new version of Cloud Functions called 2nd generation, which is in "Public Preview". So in order to move forward, let's identify our working assumptions.

After the initial setup is complete, you should have a single file called firebase.json and a directory called functions . The functions directory is where we'll write our API code. Take the emulator out for a spin. Congrats, you have Firebase Functions working on your local system! To exit the emulator, just type 'Ctrl-C' at your terminal window.

This is all very exciting. Let's push our new "hello world" function into the cloud by running firebase deploy from the command line. If we navigate to the Function URL printed in the output, we should get the 'Hello from Firebase!' message. Exciting! Do you see how easy it is to create Firebase Functions? We've done all the hard part of setting up our local environment and the Firebase project.

Let's jump into creating an API using Express. Install express ( npm install express inside the functions directory), then edit the index.js file so that it serves an Express app; a sketch of this file appears at the end of this article. Then, if you run the emulator again, you can load up your API locally. Note the URL link on the emulator is a little different -- it should have 'api' added at the end. You should see our 'Hello World' message. Now for more fun, add '/testJSON' to the end of your link, and you should see the browser return back JSON data that our API has sent.

Now finally, let's deploy to the cloud with another firebase deploy . Note that when you try to deploy, Firebase is smart enough to detect that major changes to the URL structure have occurred. You'll need to verify that you did indeed make these changes and everything is ok. Since this is a trivial function, you can type Yes . Firebase will delete the old function we deployed earlier and create a new one. Once that completes, try to load the link and validate your API is now working!

This article has walked you through the basics of using Firebase Functions to host your own API. The process of writing and creating a full featured API is beyond the scope of this article. There are many resources out there to help with this task, but I hope you'll think about Firebase Functions next time you are starting a project.
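A minimal functions/index.js along the lines the article describes might look like this (a sketch: the route paths mirror the text, but the exact payload and the function name api are assumptions on my part):

```js
const functions = require("firebase-functions");
const express = require("express");

const app = express();

// Root route: plain-text hello
app.get("/", (req, res) => {
  res.send("Hello World");
});

// /testJSON route: send back some JSON data
app.get("/testJSON", (req, res) => {
  res.json({ message: "Our API has sent this JSON data" });
});

// Expose the Express app as a single Cloud Function named "api",
// which is why 'api' is appended to the emulator URL
exports.api = functions.https.onRequest(app);
```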


Introducing Volta - it manages your Node.js versions so you don't have to

Web development is tough enough as it is; something as mundane as mismatched versions of Node in development versus production shouldn't be another thing you have to keep in mind. Volta can prevent this sort of issue and so much more for you and your dev team automatically, and it's easy to set up to boot. Read on to get started using it yourself.

When you're working with a team of developers, especially on a team responsible for managing multiple applications, you very well might have JavaScript apps that run on different versions of Node.js. Some might use Node 10, others Node 12; some may use Yarn as their package manager, others might use npm - and keeping track of all that is really hard. Ensuring every developer on the team is developing with the correct versions all the time is even harder. But it's essential. While the consequences might be relatively minor during local development (it works on one dev's machine and throws an error on another's), this sort of lack of standardization and clarity can have devastating effects when it comes to production. And it could have all been avoided if we'd been using a handy little tool called Volta. I want to introduce Volta to you today so you can avoid the stress we went through - it's simple to get started with and can prevent catastrophes like this.

What this means in practice is that Volta makes managing Node, npm, yarn, or other JavaScript executables shipped as part of packages, really easy.

I've told you what Volta is, but you're probably still wondering why I chose it in particular - it's certainly not the only game in town. NVM is another well-known option for managing multiple versions of Node. I used to use Node Version Manager (NVM) myself. Heck, I even wrote a whole blog post about how useful it was. NVM is good, and it does exactly what it sounds like: it allows you to easily download and switch versions of Node.js on your local machine. While it does make this task simpler, NVM is not the easiest to set up initially, and, more importantly, the developer using it still has to remember to switch to the correct version of Node for the project they're working on.

Volta, on the other hand, is easy to install, and it takes the thinking part out of the equation: once Volta's set up in a project and installed on a local machine, it will automatically switch to the proper versions of Node. Yes, you heard that right. Similar to package managers, Volta keeps track of which project (if any) you're working on based on your current directory. The tools in your Volta toolchain automatically detect when you're in a project that's using a particular version of the tools and take care of routing to the right version of the tools for you. Not only that, but it will also let you define yarn and npm versions in a project, and if the version of Node defined in a project isn't downloaded locally, Volta will go out and download the appropriate version. But when you switch to another project, Volta will defer to any presets in that project or revert back to the default environment variables. Cool, right?

Ready to see it in action? For ease of getting started, let's create a brand new React application with Create React App; then we'll add Volta to our local machine and our new project. First things first, create a new app by running the create-react-app command from a terminal (sketched below). Once you've got your new React app created, open up the code in an IDE, and start it up via the command line.
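The commands for this step might look like the following (the app name volta-demo is a placeholder I've chosen, not from the article):

```bash
# Scaffold a new React app (name is a placeholder)
npx create-react-app volta-demo

# Open the project and start the dev server
cd volta-demo
npm start
```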
If everything goes according to plan, you'll see the nice, rotating React logo when you open up a browser at http://localhost:3000 . Now that we've got an app, let's add Volta to it. Installing Volta on your development machine is a piece of cake - no matter your chosen operating system.

Unix: If you're using a Unix-based system (MacOS, Linux or the Windows Subsystem for Linux - WSL) to install Volta, it's super easy. In a terminal, run the following command: curl https://get.volta.sh | bash

Windows: If you've got Windows, it's almost this easy. Download and run the Windows installer and follow the instructions.

Once Volta's finished downloading, double-check it installed successfully by running volta --version in your terminal. Hopefully, you'll see a version for Volta like my screenshot below. If you don't, try quitting your terminal completely, re-opening a new terminal session and running that command again. The current version of Volta on my machine is now v1.0.5.

Before we add our Volta-specific Node and npm versions to our project, let's see what the default environment variables are.

Get a baseline reading: In a terminal at the root of your project, run node -v && npm -v . For me, my default versions of Node and npm are v14.18.1 and v6.14.15, respectively. With our baseline established, we can switch up our versions just for this project with Volta's help.

Pin a node version: We'll start with Node. Since v16 is the current version of Node, let's add that to our project. Inside of our project at the root level, where our package.json file lives, run volta pin node@16 . Using volta pin [JS_TOOL]@[VERSION] will put this particular JavaScript tool at our specified version into our app's package.json . After committing this to our repo with git, any future devs using Volta to manage dependencies will be able to read this out of the repo and use the exact same version. With Volta, we can be as specific or generic as we want defining versions, and Volta will fill in any gaps. I specified the major Node version I wanted (16) and then Volta filled in the minor and patch versions for me. Pretty nice! When you've successfully added your Node version, you'll see the following success message in your terminal: pinned node@16.11.1 in package.json (or whatever version you pinned).

Pin an npm version: That was pretty straightforward; now let's tackle our npm version. Still in the root of our project in the terminal, run volta pin npm . In this particular instance, I didn't even specify any sort of version for npm, so Volta defaults to choosing the latest LTS release to add to our project. Convenient. The current LTS version for npm is 8, so now our project's been given npm v8.1.0 as its default version.

To confirm the new JavaScript environment versions are part of our project, check the app's package.json file. Scroll down to the bottom and you should see a new property named "volta" . Inside of the "volta" property should be a "node": "16.11.1" and an "npm": "8.1.0" version (see the excerpt below). From now on, any dev who has Volta installed on their machine and pulls down this repo will have their settings for these tools automatically switch to use these particular node and npm versions. To make doubly sure, you can also re-run the first command we ran before pinning our versions with Volta to see what our current development environment is now set to. After this, your terminal should tell you it's using those same versions: Node.js v16 and npm v8. Now, you can sit back and let Volta handle things for you. Just like that.
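The resulting excerpt at the bottom of package.json should look like this (versions as reported in the article):

```json
{
  "volta": {
    "node": "16.11.1",
    "npm": "8.1.0"
  }
}
```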
😎 If you want to see what happens when there's nothing specified for Volta (like when you're just navigating between repos or using your terminal for shell scripts), try navigating up a level from your project's root and checking your Node and npm versions again. In the screenshot below, I opened two terminals side by side: the one on the left is inside of my project with Volta versions, the one on the right is a level higher in my folder structure. I ran node -v && npm -v in both, and in my project, Node v16 and npm v8 are running, but outside of the project, Node v14 and npm v6 are present. I did nothing but switch directories, and Volta took care of the rest. Try and tell me this isn't cool and useful. I dare you. 😉

Building solid, stable apps is tough enough without having to also keep track of which versions of Node, yarn and npm each app runs best with. By using a tool like Volta, we can take the guesswork out of our JavaScript environment variables, and actually make it harder for a member of the dev team to use the wrong versions than the right ones. And remember to double check your local Node version matches your production server's Node version, too.

In 10 modules and 54 lessons, I cover all the things I learned while at The Home Depot that go into building and maintaining large, mission-critical React applications - because it's so much more than just making the code work. From tooling and refactoring, to testing and design system libraries, there's a ton of material and hands-on practice here to prepare any React developer to build software that lives up to today's high standards. I hope you'll check it out.


Publishing Packages to NPM

npm centralizes third-party, open-source Node.js packages and libraries within a large, online registry. Contributing to the Node.js ecosystem involves no vetting process, which lets anyone publish packages to the npm registry with little effort. Not only has npm's short process for publishing packages led to the explosive growth of the Node.js ecosystem, but it has also fostered the development of various types of packages: front-end libraries/frameworks, tooling, bundlers, routers, state management, etc. However, this comes at the cost of more packages being released with more security vulnerabilities and less reliability. Despite these concerns, npm continues to introduce new features and statistics to help developers identify high-quality packages.

A library author uses npm's command-line client to publish their library's package to the npm registry and share it. Once published, npm allows developers to install this package within their projects or update their projects' dependencies to the latest version of this package.

Below, I'm going to show you how to publish a package to the npm registry. I will demonstrate this with the rgb-hex TypeScript library, which will be modified accordingly to get it ready for publishing. To get started, you must have an npm account. If you do not have an npm account, sign up for an account here. Within the root of the package directory, run npm login in the terminal. Logging into your account associates your package with your account. You will be prompted to enter your npm username, password and e-mail address.

When you publish a package to the npm registry, there are some files and directories, such as a testing suite and coverage reports, that can be omitted from the package. A testing suite validates your library's functionality, and coverage reports inform you of areas in your code that lack tests; they are not required for end users to consume your library within their projects. The main benefit of excluding files with .npmignore is reducing the number of files and directories the end user downloads when fetching your package from npm. Ideally, end users should be able to quickly download your package, and your package should not take up unnecessary space on their machines.

If your library uses the Jest testing framework, then you would add the __tests__ and coverage directories to the .npmignore file. To further reduce the number of files within the package, you can exclude formatting-related configuration files, such as .eslintrc.js , .prettierrc and .editorconfig (see the sketch below). Just think about which files and directories are needed within the package to allow end users to use your library; whichever files and directories are not necessary should be added to the .npmignore file. Note: If the project contains a .gitignore file, then the files and directories listed within the .gitignore file will automatically be excluded from the package. Alternatively, you could list the files to include within the package via package.json 's files property.

The entry point of your library indicates the file from which execution begins when the library is imported. By default, npm searches for a main property inside of the package.json file to determine the package's entry point. For tooling that supports ESM modules, you can define a module property that points to the package's .mjs file. Note: module is not an official package.json property. It is a proposal for ES6 module interoperability in Node.js. Read more about it here.
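A .npmignore along the lines described above, for a Jest-based project, might look like this (a sketch; add or remove entries to match your own project):

```
# Testing suite and coverage reports
__tests__
coverage

# Formatting-related configuration files
.eslintrc.js
.prettierrc
.editorconfig
```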
Commonly, your library's generated build will be outputted to a build or dist directory. Therefore, the main and module properties should point to files within either of these directories (e.g., "main": "dist/index.js" ). To verify the package's contents before publishing to npm (and whether the .gitignore and .npmignore files filter out the correct files and directories), create a tar archive of the package. This tar archive contains all of the files and directories that will end up in the published package. To generate the tarball, run npm pack . In the current directory, you will find a tar file named <package-name>-v<version>.tgz , where package-name comes from the name property of package.json and version comes from the version property of package.json . To extract and list its contents, run tar -xzf on this file; the tar -xzf command deposits the contents into a directory named package . Here, we can see that the package.json file, README.md file and dist directory are included.

Lastly, to publish the package to npm, run npm publish (see the command sketch below). When prompted to enter a version, press enter to use the version mentioned in the package.json file. If you run into an error message indicating that two-factor authentication is required, then you will need to enable two-factor authentication for your npm account. To do so, visit your npm account's "Account Settings" page and click the "Enable 2FA" button under the "Two Factor Authentication" section. After you enter your password, npm redirects you to a wizard that walks through the process of enabling two-factor authentication. Enable two-factor authentication for both authorization and updating/publishing packages. Scan the QR code with an authenticator app like Authy, then verify that Authy successfully registered npm by entering a six-digit code generated by Authy. Once two-factor authentication is successfully enabled, you will be shown recovery codes. Save them to a new, empty text file; without these codes, it will not be possible to recover your account in the event that you are not able to provide the one-time password. Return to the terminal and re-enter the npm publish command.

Once your package has been published to npm, you can navigate to your package's page at npm's website. Try publishing your own packages to npm! You can also check out our new course, The newline Guide to Creating React Libraries from Scratch , where we teach you everything you need to know to succeed in creating a library.
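Collected together, the pack-and-publish steps above look like this (the tarball name is illustrative; yours will reflect your package's actual name and version fields):

```bash
# Create a tarball containing exactly what would be published
npm pack

# Extract the tarball (name is illustrative) and list its contents;
# the files land in a directory named "package"
tar -xzf rgb-hex-v1.0.0.tgz
ls package

# Publish the package to the npm registry
npm publish
```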


Deploying a Node.js and PostgreSQL Application to Heroku

Serving a web application to a global audience requires deploying, hosting and scaling it on reliable cloud infrastructure. Heroku is a cloud platform as a service (PaaS) that supports many server-side languages (e.g., Node.js, Go, Ruby and Python), monitors application status in a beautiful, customizable dashboard and maintains an add-ons ecosystem for integrating tools/services such as databases, schedulers, search engines, document/image/video processors, etc. Although it is built on AWS, Heroku is simpler to use compared to AWS. Heroku automatically provisions resources and configures low-level infrastructure so developers can focus exclusively on their application without the additional headache of manually setting up each piece of hardware and installing an operating system, runtime environment, etc.

When deploying to Heroku, Heroku's build system packages the application's source code and dependencies together with a language runtime using a buildpack and slug compiler to generate a slug, which is a highly optimized and compressed version of your application. Heroku loads the slug onto a lightweight container called a dyno. Depending on your application's resource demands, it can be scaled horizontally across multiple concurrent dynos. These dynos run on a shared host, but the dynos responsible for running your application are isolated from dynos running other applications. Initially, your application will run on a single web dyno, which serves your application to the world. If a single web dyno cannot sufficiently handle incoming traffic, then you can always add more web dynos. For requests exceeding 500ms to complete, such as uploading media content, consider delegating this expensive work as a background job to a worker dyno. Worker dynos process these jobs from a job queue and run asynchronously to web dynos to free up the resources of those web dynos.

Below, I'm going to show you how to deploy a Node.js and PostgreSQL application to Heroku. First, let's download the Node.js application by cloning the project from its GitHub repository.

Let's walk through the architecture of our simple Node.js application. It is a multi-container Docker application that consists of three services: an Express.js server, a PostgreSQL database and pgAdmin. As a multi-container Docker application orchestrated by Docker Compose ( docker-compose.yml ), the PostgreSQL database and pgAdmin containers are spun up from the postgres and dpage/pgadmin4 images respectively. These images do not need any additional modifications.

The Express.js server, which resides in the api subdirectory, connects to the PostgreSQL database via the pg PostgreSQL client. The module api/lib/db.js defines a Database class that establishes a reusable pool of clients upon instantiation for efficient memory consumption. The connection string URI follows the format postgres://[username]:[password]@[host]:[port]/[db_name] , and it is accessed from the environment variable DATABASE_URL . Anytime a controller function (the callback argument of the methods app.get , app.post , etc.) calls the query method, the server connects to the PostgreSQL database via an available client from the pool. Then, the server queries the database, directly passing the arguments of the query method to the client.query method. Once the database sends the requested data back to the server, the client is released back to the pool, available for the next request to use.
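Based on the description above, api/lib/db.js is roughly shaped like this (a sketch; the real module may differ in details such as error handling, and the getAllTables query shown is my assumption):

```js
// api/lib/db.js -- a reusable pool of PostgreSQL clients
const { Pool } = require("pg");

class Database {
  constructor() {
    // Connection string format:
    // postgres://[username]:[password]@[host]:[port]/[db_name]
    this.pool = new Pool({ connectionString: process.env.DATABASE_URL });
  }

  // Borrow a client from the pool, run the query, release the client
  async query(text, params) {
    const client = await this.pool.connect();
    try {
      return await client.query(text, params);
    } finally {
      client.release();
    }
  }

  // Low-level information about the tables available in the database
  getAllTables() {
    return this.query(
      "SELECT * FROM information_schema.tables WHERE table_schema = 'public'"
    );
  }
}

module.exports = new Database();
```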
Additionally, there's a getAllTables method for retrieving low-level information about the tables available in our PostgreSQL database. In this case, our database only contains a single table: cp_squirrels .

The table cp_squirrels is seeded with records from the 2018 Central Park Squirrel Census dataset downloaded from the NYC Open Data portal. The dataset, downloaded as a CSV file, contains the fields obs_date (observation date) and lat_lng (coordinates of observation) with values that are not compatible with the PostgreSQL data types DATE and POINT respectively. Instead of directly copying the contents of the CSV file to the cp_squirrels table, db/create.sql copies from the output of a GNU awk ("gawk") script that transforms these fields. Upon the initialization of the PostgreSQL database container, this SQL file is run by adding it to the docker-entrypoint-initdb.d directory (see db/Dockerfile ).

This server exposes a RESTful API with two endpoints: GET /tables and POST /api/records . The GET /tables endpoint simply calls the db.getAllTables method, and the POST /api/records endpoint retrieves data from the PostgreSQL database based on a query object sent within the incoming request. To bypass CORS restrictions for clients hosted on a different domain (or running on a different port on the same machine), all responses must have the Access-Control-Allow-Origin header set to the allowable domain ( process.env.CLIENT_APP_URL ) and the Access-Control-Allow-Headers header set to Origin, X-Requested-With, Content-Type, Accept (see api/index.js ).

Notice that the Express.js server requires three environment variables: CLIENT_APP_URL , PORT and DATABASE_URL . These environment variables must be added to Heroku, which we will do later on in this post.

The Dockerfile for the Express.js server ( api/Dockerfile ) instructs how to build the server's Docker image based on its needs. It automates the process of setting up and running the server. Since the server must run within a Node.js environment and relies on several third-party dependencies, the image must be built upon the node base image and install the project's dependencies before running the server via the npm start command. However, because the filesystem of a Heroku dyno is ephemeral, volume mounting is not supported. Therefore, we must create a new file named Dockerfile-heroku (in the api subdirectory) that is dedicated only to the deployment of the application to Heroku and is not reliant on a volume.

Unfortunately, you cannot deploy a multi-container Docker application via Docker Compose to Heroku. Therefore, we must deploy the Express.js server to a web dyno with Docker and separately provision a PostgreSQL database via the Heroku Postgres add-on . To deploy an application with Docker, you must either build a Docker image with a heroku.yml manifest file or push a pre-built image to Heroku's container registry. For this tutorial, we will deploy the Express.js server to Heroku by building a Docker image with heroku.yml and deploying this image to Heroku.

Let's create a heroku.yml manifest file inside of the api subdirectory. Since the Express.js server will be deployed to a web dyno, we must specify the Docker image to build for the application's web process, which the web dyno belongs to. Because our api/Dockerfile already has a CMD instruction, which specifies the command to run within the container, we don't need to add a run section. Let's also add a setup section, which defines the environment's add-ons and configuration variables during the provisioning stage.
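Putting the build and setup sections together, api/heroku.yml would look roughly like this (a sketch; the add-on plan and the DATABASE name mirror the article's description):

```yaml
# api/heroku.yml
setup:
  addons:
    # Free "Hobby Dev" Heroku Postgres plan, attached as DATABASE
    - plan: heroku-postgresql:hobby-dev
      as: DATABASE
build:
  docker:
    # Build the web process image from the Heroku-specific Dockerfile
    web: Dockerfile-heroku
```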
Within this setup section, add the Heroku PostgreSQL add-on. Choose the free "Hobby Dev" plan and give it a unique name, DATABASE . This unique name is optional, and it is used to distinguish it from other Heroku PostgreSQL add-ons. Fortunately, once the PostgreSQL database is provisioned, the DATABASE_URL environment variable, which contains the database connection information for this newly provisioned database, will be made available to our application.

Check if your machine already has the Heroku CLI installed. If not yet installed, then install the Heroku CLI. For MacOSX, it can be installed via Homebrew; for other operating systems, follow the instructions here . After installation, for the setup section of the heroku.yml manifest file to be recognized and used for creating a Heroku application, switch to the beta update channel and install the heroku-manifest plugin (see the command list at the end of this section). Without this step, the PostgreSQL database add-on will not be provisioned from the heroku.yml manifest file; you would have to manually provision the database via the Heroku dashboard or the heroku addons:create command. Once installed, close out the terminal window and open a new one for the changes to take effect. Note: To switch back to the stable update stream, uninstall this plugin.

Now, authenticate yourself by running the heroku login command. Note: If you want to remain within the terminal, as in entering your credentials directly within the terminal, then add the -i option after the command. This command prompts you to press any key to open a login page within a web browser. Enter your credentials within the login form. Once authenticated, Heroku CLI will automatically log you in.

Within the api subdirectory, create a Heroku application with the --manifest flag. This command automatically sets the stack of the application to container and sets the remote repository of the api subdirectory to heroku . When you visit the Heroku dashboard in a web browser, this newly created application is listed under your "Personal" applications.

Set the configuration variable CLIENT_APP_URL to a domain that should be allowed to send requests to the Express.js server. Note: The PORT environment variable is automatically exposed by the web dyno for the application to bind to. As previously mentioned, once the PostgreSQL database is provisioned, the DATABASE_URL environment variable will automatically be exposed. Under the application's "Settings" tab in the Heroku Dashboard, you can find all configuration variables set for your application under the "Config Vars" section.

Create a .gitignore file within the api subdirectory, commit all the files within the api subdirectory, and push the application to the remote Heroku repository. The application will be built and deployed to the web dyno. Ensure that the application has successfully deployed by checking the logs of this web dyno. If you visit https://<application-name>.herokuapp.com/tables in your browser, then a successful response is returned and printed to the browser.

In case the PostgreSQL database is not provisioned, manually provision it with heroku addons:create . Then, restart the dynos for the DATABASE_URL environment variable to be available to the Express.js server at runtime. Deploy your own containerized applications to Heroku!
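Collected in one place, the CLI steps described above look roughly like this (a sketch: the application name and client URL are placeholders, and I'm assuming @heroku-cli/plugin-manifest is the package behind the heroku-manifest plugin the article mentions):

```bash
# Install the Heroku CLI (macOS via Homebrew) and log in
brew tap heroku/brew && brew install heroku
heroku login

# Switch to the beta update channel and install the manifest plugin
heroku update beta
heroku plugins:install @heroku-cli/plugin-manifest

# From the api subdirectory: create the app from heroku.yml
heroku create your-app-name --manifest

# Allow your client's domain to call the Express.js server
heroku config:set CLIENT_APP_URL=https://your-client-app.example.com

# Commit and deploy to the web dyno
git add . && git commit -m "Deploy API to Heroku"
git push heroku master

# Watch the web dyno's logs to confirm the deploy succeeded
heroku logs --tail

# If the PostgreSQL add-on was not provisioned automatically:
heroku addons:create heroku-postgresql:hobby-dev
heroku restart
```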
