3D computer room data center visualization based on HTML5 WebGL and VR technology

Preface

In 3D computer room and data center visualization applications, the continuing spread of networked video surveillance systems has made network cameras increasingly common in surveillance deployments, and the arrival of the high-definition era has further accelerated their development and adoption.

While the number of surveillance cameras keeps growing, surveillance systems face serious problems: massive volumes of video are scattered and isolated, viewing angles are incomplete, and camera locations are unclear. How to manage cameras and monitor video feeds more intuitively and clearly has therefore become an important topic for increasing the value of video applications, and this project grew out of that need. Under the general trend of technology convergence, combining video fusion, virtual-real fusion, 3D dynamics, and related techniques to achieve real-time, dynamic visual surveillance of 3D scenes, and to identify, analyze, and mine useful information from massive data in the service of public safety, has become the direction in which visual video surveillance platforms are evolving.

Industry leaders such as Hikvision and Dahua already plan the camera layout of public places and campuses with this kind of approach: adjusting the parameters of a physical camera updates the visible range and monitoring direction of the corresponding camera model in the system, making it easy to understand intuitively which area each camera covers and from what angle.

The following is the project address: WebGL custom 3D camera monitoring model based on HTML5

Effect preview

Overall scene-camera renderings

Partial scene-camera renderings

Code generation
Camera model and scene

The camera model used in the project was created in 3ds Max, which can export obj and mtl files. In HT, the camera model in the 3D scene is generated by parsing these obj and mtl files.
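A minimal sketch of that loading step follows; the file paths, the 'camera' model name, and the callback body are assumptions rather than the project's actual code:

// Hedged sketch: load the obj/mtl pair exported from 3ds Max and register
// the result as a shape3d model named 'camera' (paths and name are placeholders).
ht.Default.loadObj('models/camera.obj', 'models/camera.mtl', {
    center: true,          // center the model at its own origin
    shape3d: 'camera',     // register the parsed model under this name
    finishFunc: function(modelMap) {
        if (modelMap) {
            // nodes can now display the model via node.s('shape3d', 'camera')
        }
    }
});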

The scene in the project was built with HT's 3D editor. Some models in the scene were modeled directly in HT, while others were modeled in 3ds Max and then imported into HT. The white light strips on the ground are achieved with a ground texture applied in HT's 3D editor.

Cone modeling

A 3D model is composed of elementary triangular faces. For example, a rectangle can be built from 2 triangles, and a cube from 6 faces, i.e. 12 triangles; by analogy, more complex models can be assembled from many small triangles. A 3D model definition is therefore a description of all the triangles that make up the model. Each triangle consists of three vertices, and each vertex is determined by its x, y, z coordinates. HT uses the right-hand rule to determine, from the order of a triangle's three vertices, which side is its front face.

In HT, a custom 3D model can be registered through the ht.Default.setShape3dModel(name, model) function, and this is how the cone in front of the camera is generated. The cone can be regarded as consisting of 5 vertices and 6 triangles, as shown in the following figure:

ht.Default.setShape3dModel(name, model)

1. name is the model name. If the name matches a predefined model's, the predefined model is replaced.
2. model is a JSON object in which vs represents the vertex coordinate array, is represents the index array, and uv represents the texture coordinate array. To define a face separately, use bottom_vs, bottom_is, bottom_uv, top_vs, top_is, top_uv, and so on; such a face can then be controlled individually through shape3d.top.*, shape3d.bottom.*, etc.
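For intuition, here is a minimal sketch (not from the project) that registers a single rectangle built from two triangles, following the vs/is/uv convention just described:

// A unit square in the xy-plane: 4 vertices, 2 triangles (6 indices).
// Counter-clockwise index order marks the front face (right-hand rule).
ht.Default.setShape3dModel('demoQuad', {
    vs: [
        -0.5, -0.5, 0,   // vertex 0: bottom-left
         0.5, -0.5, 0,   // vertex 1: bottom-right
         0.5,  0.5, 0,   // vertex 2: top-right
        -0.5,  0.5, 0    // vertex 3: top-left
    ],
    is: [0, 1, 2, 0, 2, 3],
    uv: [0, 0, 1, 0, 1, 1, 0, 1]   // one (u, v) pair per vertex
});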

Here is my code to define the model:

// Registers the view cone (a pyramid) for a camera node.
// fovy controls how wide the cone opens (related to the tangent of the view angle).
var setRangeModel = function(camera, fovy) {
    var fovyVal = 0.5 * fovy;
    // 5 vertices: the apex at the origin plus the 4 corners of the base
    var pointArr = [0, 0, 0, -fovyVal, fovyVal, 0.5, fovyVal, fovyVal, 0.5, fovyVal, -fovyVal, 0.5, -fovyVal, -fovyVal, 0.5];
    ht.Default.setShape3dModel(camera.getTag(), [{
        vs: pointArr,
        is: [2, 1, 0, 4, 1, 0, 4, 3, 0, 3, 2, 0],  // 4 side triangles
        from_vs: pointArr.slice(3, 15),             // the 4 base vertices
        from_is: [3, 1, 0, 3, 2, 1],                // 2 triangles for the base
        from_uv: [0, 0, 1, 0, 1, 1, 0, 1]           // texture coordinates for the base
    }]);
}

I use the current camera's tag value as the model name; in HT a tag uniquely identifies a node, and its value can be customized by the user. pointArr records the coordinates of the five vertices of this pentahedron, and from_vs, from_is, and from_uv construct the base face separately, because the base is used to display the image the camera currently sees.

The wf.geometry property is set on the cone's style object; it adds a wireframe to the model to enhance its three-dimensional appearance. The wireframe's color and line width can be adjusted through properties such as wf.color and wf.width.

The setting code for the style attribute of the related model is as follows:

rangeNode.s({
    'shape3d': cameraName,                        // use the registered cone model
    'shape3d.color': 'rgba(52, 148, 252, 0.3)',   // translucent fill color
    'shape3d.reverse.flip': true,                 // show the front-face content on back faces
    'shape3d.light': false,                       // ignore scene lighting
    'shape3d.transparent': true,                  // enable transparent rendering
    '3d.movable': false,                          // the cone cannot be dragged in 3D
    'wf.geometry': true                           // show the wireframe
});
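The wf.color and wf.width properties mentioned above would be set the same way; a small sketch with illustrative values (not the project's settings):

// Illustrative wireframe styling; property names come from the text above,
// the values are assumptions.
rangeNode.s({
    'wf.color': 'rgb(67, 175, 241)',   // wireframe line color
    'wf.width': 2                      // wireframe line width
});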

Principle of camera image generation

Perspective projection

Perspective projection is a method of drawing or rendering on a two-dimensional paper or canvas plane so as to obtain a visual effect close to that of a real three-dimensional object; such a drawing is also called a perspective view. In a perspective view, distant objects look smaller, nearby objects look larger, and parallel lines appear to converge, which matches human visual perception more closely.

As shown in the figure above, what perspective projection finally displays on the screen is only the content inside the view frustum, so Graph3dView provides the eye, center, up, far, near, fovy, and aspect parameters to control the frustum's exact extent. For details of perspective projection, please refer to the HT for Web 3D manual.
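As a rough illustration (the values are arbitrary, not the project's settings), a few of those frustum parameters can be set on a Graph3dView like this:

// Illustrative frustum adjustments; setEye/setCenter/setFovy/setAspect are
// the same setters used later in this article, and the values are arbitrary.
g3d.setEye([0, 300, 600]);     // viewer position
g3d.setCenter([0, 0, 0]);      // look-at point
g3d.setFovy(Math.PI / 4);      // vertical field of view
g3d.setAspect(undefined);      // default aspect, derived from the view size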

Following the figure above, the approach in this project is: after a camera is initialized, cache the 3D scene's current eye and center positions; set the scene's eye and center to the camera's position and viewing direction; take a screenshot of the 3D scene at that moment, which is exactly the current camera's monitoring image; then restore the scene's eye and center to the cached positions. With this method a snapshot can be taken from any position in the 3D scene, so camera monitoring images can be generated in real time.

The relevant pseudo code is as follows:

function getFrontImg(camera, rangeNode) {
    var oldEye = g3d.getEye();
    var oldCenter = g3d.getCenter();
    var oldFovy = g3d.getFovy();
    g3d.setEye(/* the camera's position */);
    g3d.setCenter(/* a point along the camera's viewing direction */);
    g3d.setFovy(/* the camera's field of view */);
    g3d.setAspect(/* the aspect ratio of the capture */);
    g3d.validateImp();
    var img = g3d.toDataURL(); // the screenshot is the camera's monitoring image
    g3d.setEye(oldEye);
    g3d.setCenter(oldCenter);
    g3d.setFovy(oldFovy);
    g3d.setAspect(undefined);
    g3d.validateImp();
}

Testing showed that acquiring the image this way causes the page to freeze, because it screenshots the entire current 3D scene; since the scene is fairly large, toDataURL is very slow at extracting the image data. I therefore adopted an off-screen method, as follows (a sketch appears after this list):
   1. Create a new 3D scene with its width and height both set to 200px and its content identical to the main-screen scene. In HT, new ht.graph3d.Graph3dView(dataModel) creates a new scene; dataModel holds all the primitives of the current scene, so the main-screen and off-screen 3D scenes share the same dataModel, which keeps the two scenes consistent.
   2. Move the newly created scene to a position where it cannot be seen on the page, and add it to the DOM.
   3. Change the image-acquisition step to capture from the off-screen scene instead of the main screen. The off-screen image is much smaller than the main-screen one, and since the main screen's eye and center are never touched, they no longer need to be saved and restored; the switching overhead disappears, which greatly improves the speed of acquiring camera images.
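Steps 1 and 2 might look like the following minimal sketch, assuming dataModel is the main scene's data model:

// Off-screen scene sharing the main scene's dataModel (steps 1 and 2).
var outScreenG3d = new ht.graph3d.Graph3dView(dataModel);
var view = outScreenG3d.getView();
view.style.position = 'absolute';
view.style.left = '-10000px';    // parked outside the visible page
view.style.top = '0px';
view.style.width = '200px';
view.style.height = '200px';
document.body.appendChild(view); // must be in the DOM so it can render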

The following is the code implemented by this method:

function getFrontImg(camera, rangeNode) {
    // Hide the cone and its image so they don't appear in the capture
    rangeNode.s('shape3d.from.visible', false);
    rangeNode.s('shape3d.visible', false);
    rangeNode.s('wf.geometry', false);
    var cameraP3 = camera.p3(); // position
    var cameraR3 = camera.r3(); // rotation
    var cameraS3 = camera.s3(); // size
    var updateScreen = function() {
        // Copy the off-screen render into this camera's own canvas
        demoUtil.Canvas2dRender(camera, outScreenG3d.getCanvas());
        rangeNode.s({
            'shape3d.from.image': camera.a('canvas')
        });
        rangeNode.s('shape3d.from.visible', true);
        rangeNode.s('shape3d.visible', true);
        rangeNode.s('wf.geometry', true);
    };

    // Point on the front face of the camera model before rotation
    var realP3 = [cameraP3[0], cameraP3[1] + cameraS3[1]/2, cameraP3[2] + cameraS3[2]/2];
    // Apply the camera's rotation to get the actual eye position
    var realEye = demoUtil.getCenter(cameraP3, realP3, cameraR3);

    outScreenG3d.setEye(realEye);
    outScreenG3d.setCenter(demoUtil.getCenter(realEye, [realEye[0], realEye[1], realEye[2] + 5], cameraR3));
    outScreenG3d.setFovy(camera.a('fovy'));
    outScreenG3d.validate();
    updateScreen();
}

 

The code above uses a getCenter method, which returns the position of a point in the 3D scene after it has been rotated around another point by a given angle: with pointA as the pivot, it returns where pointB ends up after rotating by r3. It is built on the math utilities HT encapsulates under ht.Math; the code is as follows:

// pointA: the pivot point; pointB: the point to rotate around pointA
// r3: [xAngle, yAngle, zAngle], the rotation angles around the x, y, z axes
var getCenter = function(pointA, pointB, r3) {
    var mtrx = new ht.Math.Matrix4();
    var euler = new ht.Math.Euler();
    var v1 = new ht.Math.Vector3();
    var v2 = new ht.Math.Vector3();

    // Build the rotation matrix from the Euler angles
    mtrx.makeRotationFromEuler(euler.set(r3[0], r3[1], r3[2]));

    v1.fromArray(pointB).sub(v2.fromArray(pointA)); // v1 = AB
    v2.copy(v1).applyMatrix4(mtrx);                 // v2 = AB rotated
    v2.sub(v1);                                     // v2 = displacement from B to B'

    return [pointB[0] + v2.x, pointB[1] + v2.y, pointB[2] + v2.z];
};

The vector arithmetic applied here is the basic addition rule OB + BB' = OB': adding a displacement vector to a point yields the displaced point.

The method breaks down into the following steps:

   1. var mtrx = new ht.Math.Matrix4() creates a transformation matrix, and mtrx.makeRotationFromEuler(euler.set(r3[0], r3[1], r3[2])) turns it into the rotation matrix for rotations of r3[0], r3[1], r3[2] around the x, y, and z axes.
   2. Two vectors v1 and v2 are created with new ht.Math.Vector3().
   3. v1.fromArray(pointB) creates the vector from the origin to pointB.
   4. v2.fromArray(pointA) creates the vector from the origin to pointA.
   5. v1.fromArray(pointB).sub(v2.fromArray(pointA)) computes OB - OA, which is the vector AB; v1 now holds AB.
   6. v2.copy(v1) copies v1 into v2, and v2.copy(v1).applyMatrix4(mtrx) applies the rotation matrix to v2; after the transformation, v2 is the vector AB rotated by the given angles.
   7. v2.sub(v1) then yields the displacement from pointB to its rotated position, i.e. the vector BB'; this is now stored in v2.
   8. By the vector addition rule, the rotated point is [pointB[0] + v2.x, pointB[1] + v2.y, pointB[2] + v2.z].
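As a quick sanity check (hypothetical values): rotating the point [0, 0, 100] by 90° around the y-axis, with the pivot at the origin, should land it on the x-axis:

// Hypothetical usage of getCenter: pivot [0, 0, 0], rotate 90° around y.
var rotated = getCenter([0, 0, 0], [0, 0, 100], [0, Math.PI / 2, 0]);
// rotated is approximately [100, 0, 0]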

The 3D scene in this project is actually the VR example from Hightopo's recent HT Industrial Internet booth at the Guizhou Digital Expo. The public has high expectations for VR/AR, but the road still has to be walked step by step; even Magic Leap, having raised $2.3 billion, delivered a first product that was panned as "Full of Shit". That topic can be expanded on another time; here is a video still from the scene at the time:

Pasting the 2D image onto the 3D model

The previous step gives us a screenshot for the current camera position, so how do we paste it onto the base of the pentahedron constructed earlier? The base rectangle was built with from_vs and from_is, so in HT the shape3d.from.image property in the pentahedron's style can simply be set to the current image, while the from_uv array defines how the texture is mapped, as shown in the following figure:

The following is the code that defines the position of the texture from_uv:

from_uv: [0, 0, 1, 0, 1, 1, 0, 1]

from_uv is the array of texture coordinates. Following the figure above, each (u, v) pair maps one vertex of the from face to a point on the image, which is how the 2D image gets pasted onto the from face of the 3D model.
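Read with one (u, v) pair per base vertex (u runs left to right, v bottom to top), the same array can be annotated like this; the corner assignments are an interpretation of the figure:

from_uv: [
    0, 0,   // vertex 0 of from_vs: bottom-left corner of the image
    1, 0,   // vertex 1: bottom-right corner
    1, 1,   // vertex 2: top-right corner
    0, 1    // vertex 3: top-left corner
]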

Control panel

In HT, the following panel is generated with new ht.widget.Panel():

Each camera in the panel has a module presenting its current monitoring image. This is in fact also a canvas, and it is the very same canvas as the one showing the monitoring image on the front of the cone in the scene. Each camera keeps its own canvas holding its real-time monitoring picture, so the canvas can be pasted anywhere. The code that adds the canvas to the panel is as follows:

formPane.addRow([{
    element: camera.a('canvas')
}], 240, 240);

In the code, the canvas node is stored under the attr attribute of the camera node, so the current camera's picture can be obtained through camera.a('canvas').
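A plausible sketch of how that canvas could be created and stored (the size and creation flow are assumptions, not the project's code):

// Hypothetical: create a per-camera canvas and store it via node.a(name, value).
var canvas = document.createElement('canvas');
canvas.width = canvas.height = 240;   // assumed to match the 240 x 240 row size
camera.a('canvas', canvas);           // setter form of attr access
// From now on, camera.a('canvas') returns this same element anywhere.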

Each control row in the panel is added through formPane.addRow; for details, please refer to the HT for Web form manual. The form pane is then added to the panel via ht.widget.Panel; for details, please refer to the HT for Web panel manual.

Part of the control code is as follows:

formPane.addRow(['rotateY', {
    slider: {
        min: -Math.PI,
        max: Math.PI,
        value: r3[1],
        onValueChanged: function() {
            var cameraR3 = camera.r3();
            camera.r3([cameraR3[0], this.getValue(), cameraR3[2]]);
            rangeNode.r3([cameraR3[0], this.getValue(), cameraR3[2]]);
            getFrontImg(camera, rangeNode);
        }
    }
}], [0.1, 0.15]);

The control panel uses addRow to add control elements; the code above adds the control for the camera's rotation around the y-axis. onValueChanged is called whenever the slider's value changes. It obtains the camera's current rotation through camera.r3(); since we rotate around the y-axis only, the x and z angles stay unchanged while the y angle is replaced. camera.r3([cameraR3[0], this.getValue(), cameraR3[2]]) therefore adjusts the camera's rotation, rangeNode.r3([cameraR3[0], this.getValue(), cameraR3[2]]) keeps the cone in front of the camera in sync, and the previously encapsulated getFrontImg function is then called to fetch the real-time image at the new angle.

In the project, the panel's title background is made translucent through the Panel configuration parameter titleBackground: rgba(230, 230, 230, 0.4); similar title parameters such as titleColor and titleHeight can be configured as well, and the splitting parameters separatorColor, separatorWidth, etc. set the color and width of the dividing lines between internal panels. Finally, panel.setPositionRelativeTo('rightTop') places the panel in the upper-right corner, and document.body.appendChild(panel.getView()) adds the panel's outermost div to the page, where panel.getView() returns the panel's outermost DOM node.

The specific initialization panel code is as follows:

function initPanel() {
    var panel = new ht.widget.Panel();
    var config = {
        title: " ",   // overall panel title
        titleBackground: 'rgba(230, 230, 230, 0.4)',
        titleColor: 'rgb(0, 0, 0)',
        titleHeight: 30,
        separatorColor: 'rgb(67, 175, 241)',
        separatorWidth: 1,
        exclusive: true,
        items: []
    };
    cameraArr.forEach(function(data, num) {
        var camera = data['camera'];
        var rangeNode = data['rangeNode'];
        var formPane = new ht.widget.FormPane();
        initFormPane(formPane, camera, rangeNode);
        config.items.push({
            title: " " + (num + 1),   // per-camera sub-panel title
            titleBackground: 'rgba(230, 230, 230, 0.4)',
            titleColor: 'rgb(0, 0, 0)',
            titleHeight: 30,
            separatorColor: 'rgb(67, 175, 241)',
            separatorWidth: 1,
            content: formPane,
            flowLayout: true,
            contentHeight: 400,
            width: 250,
            expanded: num === 0   // expand the first camera's controls by default
        });
    });
    panel.setConfig(config);
    panel.setPositionRelativeTo('rightTop');
    document.body.appendChild(panel.getView());
    // keep the panel laid out correctly when the window resizes
    window.addEventListener("resize", function() {
        panel.invalidate();
    });
}

From the control panel you can adjust the camera's direction, the coverage monitored by the camera, the length of the cone in front of the camera, and so on, with the camera's image generated in real time. Here is a screenshot of it running:

Finally, here is the 3D scene from this project running in combination with HT for Web's VR technology: