Building an integration test environment for a Node.js project and automating the CI process

Background

To make the unit tests for the project's interfaces more realistic, the best approach is to abandon mocks and embrace a real data environment. But how do we embrace it?

If we test directly against the data in the shared test environment, we not only disrupt the normal development and testing workflow, but the data may also differ between test runs. That breaks the independence of individual tests and makes it painful to track down problems later.

Following this line of thought, we can briefly summarize what we want:

1. An independent data environment dedicated to unit testing.

2. A guarantee that the data is identical before each test.

Let's consider how to achieve these two requirements.

First, the data environment. Our project uses Redis and MySQL. The most direct option is to set up an identical environment locally for testing. But this environment should not serve only one person: everyone involved in developing the project should be able to use it easily to test their own code. In addition, we want to add this test step to the CI pipeline later, so that the interface unit tests run whenever code is pushed, which further guarantees code quality.

So we chose Docker application containers to build our data environment, and used docker-compose to combine the Redis and MySQL containers into a single data-environment service. As long as a developer has Docker and docker-compose installed, they can start the data environment with a single command, which is a big improvement over a hand-rolled local setup.

That solves the first requirement; now let's look at the second.

To ensure that every test run uses the same data, we can export a data file from the test environment to serve as the template data for subsequent tests. Then, before each test run, once the data environment is up, we import this template data into the database.
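
For example, assuming MySQL and the test_db database name used later in this article (the credentials here are placeholders), the template file could be exported from the test environment with something like:

mysqldump -uroot -p123456 test_db > test_db.sql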

That's the whole idea; let's get started!

Setting up the test environment

Before starting, we need to install two basic tools: docker and docker-compose. You can learn how to install them from the official websites or third-party tutorials, so we won't repeat that here.

Since we only use public images, all we have to do is compose the data-environment services with docker-compose. In other words, we just need to tell docker-compose what to do through a YML configuration file.

I will put the configured file out first, and then briefly explain each configuration item.
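
Below is a minimal sketch of such a docker-compose.yml, assuming the official redis and mysql images. The mysql host port (6606), root password (123456), container name (project_database), and the /usr file paths come from the scripts later in this article; the image tags, the redis settings, and the host-side paths are assumptions:

version: '3'

services:
  redis:
    image: redis:5
    container_name: project_redis
    command: redis-server /usr/local/etc/redis/redis.conf
    ports:
      - '6379:6379'
    volumes:
      - ./redis.conf:/usr/local/etc/redis/redis.conf

  mysql:
    image: mysql:5.7
    container_name: project_database
    ports:
      - '6606:3306'
    volumes:
      - ./my.cnf:/etc/mysql/conf.d/my.cnf
      - ./init.sh:/usr/init.sh
      - ./test_db.sql:/usr/test_db.sql
    environment:
      MYSQL_ROOT_PASSWORD: '123456'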

1. The structure of the configuration file

1. version

Defines the version of the compose file format.

2. services

Defines the configuration of each service.

3. image

Defines the image the service is initialized from. If the image does not exist locally, docker will try to pull it from the remote registry.

4. container_name

Defines the name of the container.

5. command

Defines the command to run after the container starts, overriding the default command defined in the image.

6. ports

Defines port mappings, exposing a service port inside the container on the host. The mapping rule is (host port : container port).

7. volumes

Defines how host directories or files are mapped into the container (based on Docker's data volume mechanism). The mapping rule is (host path : container path).

8. environment

Defines environment variables.

2. Why is this configuration needed, and what problems does it solve?

1. The redis port in the container is mapped to the host, so why can't we connect?

We tried to connect to the redis service in the container with the Medis client on the host, but the connection failed. The first thought is the password: since this environment is only used for testing and security is not a concern, can we simply run redis in password-free mode?

After some searching, we found that setting the protected-mode item in the redis configuration file to no lets redis accept connections without a password.

This exposes another problem: how do we get our configuration file into the container? Container services are disposable, so we can't manually add a configuration file every time one starts. This is where the volumes configuration item comes in: it maps files from the host into the container, so we can keep the configuration file in the project directory and have it mounted automatically when the service starts.

We then add a startup command through the command configuration item and point it at this configuration file. (Because the redis configuration file is very long, only a download link for the template configuration file is attached.)

redis configuration file:
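
The full file is long; the part relevant to this problem is just the line that disables protected mode:

protected-mode no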

YML configuration items:
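
The corresponding items for the redis service in the docker-compose file look roughly like this (the host-side path is an assumption):

  redis:
    command: redis-server /usr/local/etc/redis/redis.conf
    volumes:
      - ./redis.conf:/usr/local/etc/redis/redis.conf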

But we are not done yet. At this point you will find that you still cannot connect to the redis service. Why is that?

The redis configuration file has a bind item whose default value is 127.0.0.1, which means redis only accepts local connections; a connection coming in from the host is not local to the container, so it is refused. The fix is simply to comment out this item.

redis configuration file:
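
In the mounted configuration file, the line is simply commented out:

# bind 127.0.0.1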

2. "Got a packet bigger than'max_allowed_packet' bytes" fails to import data into the database

The error message tells us that the sql file is larger than the import packet limit, so we need a custom configuration file to raise that limit. As before, we keep this configuration file in the project and map it into the container through the volumes item.

mysql configuration file:

[mysqld]
# added to avoid err "Got a packet bigger than 'max_allowed_packet' bytes"
#
net_buffer_length=1000000 
max_allowed_packet=1000000000
innodb_buffer_pool_size = 2000000000
#
 

YML configuration items:
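
In the docker-compose file, the custom my.cnf is mapped into MySQL's conf.d directory (the host-side path and file name are assumptions):

  mysql:
    volumes:
      - ./my.cnf:/etc/mysql/conf.d/my.cnf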

Automating the test process

With the YML file configured, our data environment is ready. So what still stands between us and an automated test process? You may have already wondered about this while reading the last problem: how exactly do we import data into the database?

The answer is that, before the tests begin, a script needs to create the database and import the template data.

1. Create a new database

Our project uses knex to connect to the database, but knex requires a database name when the connection is initialized. What we want, however, is to create a brand-new database and import data into it. What should we do?

The trick is to connect first to a database that is guaranteed to exist. For MySQL, we can connect to the built-in mysql database, create our new database with a sql statement, and then reconnect to that new database.

Our script is as follows:

const TEST_DB = 'test_db';
const cp = require('child_process');
const Knex = require('knex');

// The original post does not show execCommand's definition; a minimal
// promisified cp.exec wrapper is assumed here. It is used further below
// to run the docker commands.
const execCommand = cmd =>
  new Promise((resolve, reject) => {
    cp.exec(cmd, (err, stdout) => (err ? reject(err) : resolve(stdout)));
  });

// Check whether a database with the given name already exists.
const hasDB = (dbs = [], dbName) => {
  return dbs.some(item => item.Database === dbName);
};

const getDBConnectionInfo = ({
  host = '127.0.0.1',
  port = 6606,
  user = 'root',
  password = '123456',
  database = 'mysql',
}) => ({
  host,
  port,
  user,
  password,
  database,
});

const createDB = async () => {
  // Connect to the built-in `mysql` database that ships with MySQL
  let knex = Knex({
    client: 'mysql',
    connection: getDBConnectionInfo({ database: 'mysql' }),
  });

  // Drop the test database if a previous run left one behind
  const dbInfo = await knex.raw('show databases');
  if (hasDB(dbInfo[0], TEST_DB)) {
    await knex.raw(`drop database ${TEST_DB}`);
  }

  // Create a fresh test database and reconnect to it
  await knex.raw(`create database ${TEST_DB}`);
  knex = Knex({
    client: 'mysql',
    connection: getDBConnectionInfo({ database: TEST_DB }),
  });
};
 

2. Import the template data

With the database created, we need to work out how to import the template data into it (the template data is the sql file exported from the test environment).

1. Execute the sql file through knex (failed)

The first idea is to read all the sql statements out of the file and hand them to knex to execute. Unfortunately, I did not find a way to make knex execute multiple sql statements at once, so this approach ended in failure.

2. Import the sql file directly into the database (successful)

Since the first method failed, we import the sql file into the database directly.

Following this idea, we first get the hash that identifies the mysql container through a docker ps filter, and then run a command inside the container through docker exec to import the data.

Our script is as follows:

const createDB = async () => {
  // Connect to the built-in `mysql` database that ships with MySQL
  let knex = Knex({
    client: 'mysql',
    connection: getDBConnectionInfo({ database: 'mysql' }),
  });

  // Drop the test database if a previous run left one behind
  const dbInfo = await knex.raw('show databases');
  if (hasDB(dbInfo[0], TEST_DB)) {
    await knex.raw(`drop database ${TEST_DB}`);
  }

  // Create a fresh test database and reconnect to it
  await knex.raw(`create database ${TEST_DB}`);
  knex = Knex({
    client: 'mysql',
    connection: getDBConnectionInfo({ database: TEST_DB }),
  });

  let containerHash;

  // Get the hash of the mysql container via a docker name filter
  try {
    containerHash = await execCommand(
      "docker ps --filter 'name=project_database' -q"
    );
  } catch (e) {
    console.log('Failed to get the mysql container hash:', e);
  }

  // Run init.sh inside the container to import the template data.
  // Note the space between the container hash and the script path.
  try {
    await execCommand(
      `docker exec -i ${containerHash.replace('\n', '')} /usr/init.sh`
    );
  } catch (e) {
    console.log('Failed to import the template data:', e);
  }

  // Release the connection pool so the process can exit cleanly
  knex.destroy();
};
 

Points to note when constructing the docker exec command:

  1. The container hash returned by docker ps ends with a newline character, which breaks the command, so the newline has to be removed.

  2. If the mysql import command is placed directly after docker exec, it fails because the host shell resolves the < input redirection before docker exec runs, so the sql file path is looked up on the host instead of inside the container. So we add an init.sh, map it into the container together with the sql file, and simply execute the script, which sidesteps the problem.

The contents of the init.sh file are as follows:

#!/bin/bash
# init.sh

# Import the template data into the test database
mysql -uroot -p123456 test_db < /usr/test_db.sql
 

YML configuration file:
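
In the docker-compose file, the script and the template sql file are mapped into the container's /usr directory (the host-side paths are assumptions; the container paths match init.sh and the docker exec command):

  mysql:
    volumes:
      - ./init.sh:/usr/init.sh
      - ./test_db.sql:/usr/test_db.sql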

Finally, we execute this script before the unit tests start. Our project uses Jest for unit testing, so we call the createDB function inside Jest's beforeAll hook, which injects the template data automatically before the tests run.

beforeAll(async () => {
  // Recreate the database and inject the template data before the tests run
  await createDB();
  // Start the project's own service on the test port
  server = server.start(50000);
});
 

Adding it to the GitLab CI pipeline

Now that the tests are automated, we can add them to the CI pipeline. If you are not familiar with CI or GitLab's CI configuration process, you can refer to this article.

We won't go through every part of the CI configuration here; we'll just look at the main GitLab CI configuration file.

.gitlab-ci.yml configuration file:

image: docker:stable

services:
  - docker:stable-dind

before_script:
  - apk add --no-cache --quiet py-pip
  - pip install --quiet docker-compose~=1.23.0
  - apk add nodejs npm

test:
  stage: test
  script:
    - npm install --unsafe-perm=true --registry=http://r.cnpmjs.org/
    - nohup docker-compose up & npm run test
 

Because our GitLab CI Runner runs jobs in a Docker container, we use image to declare that the Runner environment is initialized from the docker image.

In addition, our data environment is itself built with Docker, so we need to use Docker inside the Runner's container to start it. This requires declaring an extra service under services; docker:stable-dind (Docker-in-Docker) is used here, which lets us create additional containers inside the Runner's Docker container.

Before the job starts, before_script installs docker-compose (which is not part of the base image) along with the nodejs and npm needed to run the project.

With the environment ready, we move on to the test job: install the dependencies, start the data environment with docker-compose, and then run Jest for the unit tests (Jest imports the template data into the database before running them).

Postscript

With that, the task is complete. Looking back over the whole process, the road was winding, but the result was worth it. If you have any good ideas, suggestions, or questions, please feel free to raise them.