Comparing functions of Douyin, ins, and WeChat-Story sticker text

This article was first published on Jianshu. Please indicate the source when reprinting, or copyright liability will be pursued. Discussion QQ group: 859640274

GitHub address

Library dependency: implementation 'com.whensunset:sticker:0.2'

I haven't updated the blog in the past two months; it almost feels like the blog is gone, haha. Actually I have been preparing a big project, and the preparation takes a long time, so please look forward to it. This article can be regarded as an appetizer for that project, filling the gap left by the long silence. It is not just filler, though: there is plenty of substantive content here, because my work in recent months has revolved around exactly this topic, the story text and sticker controls.

Reading notes:

  • 1. Text, ordinary stickers, dynamic stickers, etc. are collectively referred to as elements.
  • 2. The following abbreviations appear later: TextureView = TV, RenderThread = RT, ViewGroup = VG, Instagram = Ins, ElementContainerView = ECV, DecorationElementContainerView = DECV, ElementActionListener = EAL, WsElement = WE, RuleLineElementContainerView = RLECV, TrashElementContainerView = TECV
  • 3. Douyin and Duoshan are collectively referred to as DouShan

This article is divided into the following chapters, readers can read as needed:

  • 1. Story product technical analysis — the functions and likely technical implementations of apps on the market that can publish stories.
  • 2. Android-side sticker text architecture and implementation — how to implement an Android text-sticker feature that combines the strengths of each app.
  • 3. Imitating a Douyin sticker control — based on the core code from chapter 2, a simple re-implementation of the sticker control in the Douyin app.

1. Story product technical analysis

First of all, there are many apps on the market that support shooting and publishing stories or similar concepts. The originator of the story format abroad is Ins. Domestically, WeChat's Moments video, Duoshan's video shooting, Douyin's follow-up shooting, and so on are all borrowed from Ins's story. The analysis in this chapter is based on these four apps.

1. Product function analysis

The following table summarizes my conclusions after thoroughly trying out the better-known apps at home and abroad that can publish story videos. Let's take a closer look at the functions of each product.

| Feature | Instagram | Douyin | Duoshan | WeChat |
| --- | --- | --- | --- | --- |
| Text | Yes, most feature-rich | Yes, more functions | Yes, fewer functions | Yes, fewest functions |
| Text enlargement | Blurry with emoji, clear without; no lag when zooming | No blur; lags when zooming | No blur; lags when zooming | Slightly blurry; no lag when zooming |
| Dynamic stickers | Yes, gif only, follows the finger | Yes, supports video format, doesn't follow | Yes, supports video format, doesn't follow | Yes, gif only, follows the finger |
| Function stickers | Yes, rich in functions | Yes, average | Yes, average | Yes, location stickers only |
| Normal stickers | Yes, follows the finger closely | Yes, doesn't follow | Yes, doesn't follow | Yes, follows the finger closely |
| Text and stickers can cover each other | Can | Can't | Can't | Can |
  • 1. First of all, Ins is the uncrowned king here. After all, the story concept is what Ins made popular, and its implementation is the most complete and polished. If we need a benchmark, it is Ins.
  • 2. From the table we find that Douyin and Duoshan behave very similarly — unsurprising given they come from the same company — so we can analyze the two as one (hereafter "DouShan"). In my testing there is one experience point (not listed in the table) where DouShan beats Ins: the smoothness of switching in and out of the text-editing state. DouShan uses a smooth transition animation, while in Ins the editor appears and disappears abruptly. In my opinion there is a reason for this, which I will point out later in the technical analysis.
  • 3. WeChat, by contrast, seems oddly self-assured. In both functionality and experience it is inferior to the other three, and its only commendable point is that WeChat stickers can use the emoticons accumulated in everyday chat. Whether that counts as a dimensionality-reduction strike is left for the reader to judge.
  • 4. Now look at the questions of whether stickers follow the finger and whether text and stickers can cover each other.
    • 1. If an app's stickers only support gif, they follow the finger; if they support the video format, they do not.
    • 2. Similarly, if the stickers only support gif, text and stickers can cover each other; otherwise they cannot.
    • 3. I will give the answers to both questions in the technical analysis later.
  • 5. The last issue is blur and lag when text is enlarged. WeChat text becomes slightly blurry after zooming in, while DouShan's does not (this refers to the editing stage, not the published video). WeChat uses a rather tricky workaround to keep the text from getting too fuzzy: it limits the maximum text magnification and does not allow the font size to be adjusted. DouShan's problem is that when the text contains several emoji, zooming becomes very laggy and the screen flickers; WeChat does not have this problem — zooming stays smooth no matter how many emoji there are. Ins is a special case: the text blurs when it contains emoji, stays sharp when it does not, and zooming never lags. I will explain all of this in detail in the technical analysis.

2. Technical analysis

The birth of a feature is a process of compromise (and wrangling) between product and technology. So in this section I will discuss the technical reasons why the four apps analyzed above each fall short of a perfect experience, and lay the groundwork for the technical discussion that follows.

(1) The dispute between TextureView (SurfaceView) and ViewGroup

Readers who follow me will know that my previous post was a full source-code analysis of the SurfaceView family. When I learned about this requirement, the first thing I thought of was in fact to use a TV, because both text and stickers can be drawn onto a Surface and the performance looked acceptable. The end result, however, was that I spent several extra days of overtime completely refactoring away the code that used a TV as the base drawing container. A thousand words condensed into a doggerel: ten thousand lines of code, back to line one; the structure unclear, overtime till dawn. Let me now compare the advantages and disadvantages of TV and VG as the base drawing container:

  • 1. TV's advantages:
    • 1. The drawing logic is clear, and the drawing process can be controlled manually.
    • 2. That seems to be all...
  • 2. VG's advantages:
    • 1. A large number of ready-made controls can be combined, and these combinations basically cover all our needs. This makes developing function stickers convenient.
    • 2. The whole event-distribution pipeline is available, making it easy for elements to respond to events.
    • 3. When a TV is already present (for example, a TV is used to play the video being edited), refreshing a VG has little impact on the RT, whereas an extra TV adds load to the RT. The visible symptom: while zooming or moving elements, video playback stutters badly, because our drawing TV competes for CPU time with the TV playing the video (this is also why I finally gave up on TV).
    • 4. With a VG we can use all kinds of animations to optimize the user experience, making element state switches very smooth. An example is DouShan's animated switch into and out of text-editing mode.
    • 5. We don't have to hand-write all the drawing logic with a canvas ourselves. That kind of code is understandable to God and me when it's written, and only to God a few months later. In Zhihu parlance, such code is a "shit mountain".
  • 3. Having compared all of the above, we find that most of VG's benefits are provided for free by the Android framework layer. Implementing the same things on a TV would just be rebuilding a wheel full of holes. In terms of schedule, user experience, code extensibility, and so on, VG beats TV hands down. Please forgive the foolish TV choice I made two months ago. Dear reader, if you feel this saved you from a big pit, hurry up and follow my WeChat public account: Interesting Things in the World. Plenty of good material awaits you there.

(2). How to display dynamic stickers

From the comparison above we know there is an irreconcilable contradiction between supporting video-format resources and following the finger. Ins and WeChat chose to follow the finger, while DouShan chose to support video-format resources. Next we analyze the technical principles and the reasons for the trade-off.

  • 1. First we need to understand that, to display multiple dynamic stickers backed by video resources, it would be very clumsy to open multiple video-playback windows at the framework layer. Since the background is generally itself a video being played, we can instead merge the video resources of the dynamic stickers into the video player through native-layer capabilities. In other words, there is only ever one video player, and the dynamic-sticker resources are handed to that player to render. Of course, even an open-source player would need to be tailored to one's own features. Among the four apps, DouShan chose this solution. We can roughly infer the capabilities such a player needs:
    • 1. Play an ordinary video (obviously).
    • 2. Translate, zoom, and rotate the video.
    • 3. Add multiple sub-videos while the main video is playing, where the sub-videos also support translation, rotation, zooming, and so on.
    • 4. Change the properties of the sub-videos in real time during main-video playback. Crucially, performance must not be too bad; DouShan's current state is right at the edge of what users will accept.
  • 2. Ins and WeChat both chose to follow the finger, so clearly their implementation displays gif/webp resources in a view at the framework layer, and following the finger comes naturally. As for which control can display gif and webp images — Fresco, of course, which also happens to be made by Facebook.
  • 3. So we now know that supporting video-format resources is much harder than supporting gif. If only gif is needed, I could build the feature on my own; once video format is required, it is currently impossible for me to do alone (though it should become feasible once our video-editing SDK is finished). What, then, are the benefits of video-format resources? Let me enumerate:
    • 1. Fine control over how the dynamic sticker is displayed. We cannot control a gif at the framework layer, but with a video resource the native layer can control the playback progress, display region, and other attributes.
    • 2. The video format is more extensible than gif, and the rendered image is more refined.
  • 4. DouShan's approach has one more drawback: text and stickers cannot cover each other, because the stickers are always rendered inside the video while the text is displayed in a view — the stickers therefore always sit below the text on the z-axis.

(3). Controversy over the display mode of the text

If you have read through (1) and (2), I believe you can already guess how the four apps display text. A brief analysis:

  • 1. There is no doubt that all four apps use a VG as the base drawing container. Ins and WeChat support gif, so they obviously use views to display it. As for DouShan, although its stickers are rendered by the player, it has various function stickers, and composing those is only maintainable with views — I have personally suffered through the alternative.
  • 2. So here is the question: all of them use views to display text, so why do DouShan, Ins, and WeChat behave differently in the end? The key point is the type of view.
    • 1. The first thing we can confirm is that when text editing finishes, WeChat takes a screenshot of the EditText. What is then zoomed, moved, and rotated on screen is an image in something like an ImageView, which explains why the text blurs when enlarged. WeChat then "cleverly" limits the zoom factor so that users never perceive the text as blurry.
    • 2. Douyin and Duoshan are from the same family, so they display text the same way: the edited text stays in an EditText, i.e. the view being zoomed and rotated on screen is still an EditText. The advantage is obvious — even greatly enlarged text stays sharp. But as I said, the scheme has a flaw: the more emoji the EditText contains and the larger the magnification, the laggier the interaction becomes, sometimes with screen flicker. This appears to be a bug in EditText itself; unless Google fixes it, it will stay.
    • 3. Ins combines the two schemes: with emoji present it switches to a screenshot in an ImageView, and without emoji it uses an EditText. This is one more reason Ins deserves the title of uncrowned king — it attends to the details and tries its best to give users the optimal experience. Which of the approaches is best, I leave to readers and users to judge.
    • 4. I mentioned earlier that DouShan beats Ins at switching the text-editing state, because it animates the transition. WeChat displays a text screenshot in an ImageView, so skipping the animation is understandable. Ins, however, should be able to do the animation; my guess is that it skips it to keep the experience identical between the emoji and non-emoji cases.
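
Ins's hybrid strategy can be sketched as a simple dispatch on whether the text contains emoji. The snippet below is a plain-Java illustration of that decision only — the `containsEmoji` heuristic and the mode names are my own, not Ins's actual code:

```java
public class TextDisplayStrategy {
  enum Mode { EDIT_TEXT, SCREENSHOT_IMAGE_VIEW }

  // Crude heuristic: treat code points in the common emoji ranges as emoji.
  static boolean containsEmoji(String text) {
    return text.codePoints().anyMatch(cp ->
        (cp >= 0x1F000 && cp <= 0x1FAFF)     // emoticons, pictographs, symbols
            || (cp >= 0x2600 && cp <= 0x27BF)); // misc symbols and dingbats
  }

  // With emoji: fall back to a screenshot (ImageView) to avoid EditText's
  // zoom lag; without emoji: keep a live EditText so zoomed text stays sharp.
  static Mode chooseMode(String text) {
    return containsEmoji(text) ? Mode.SCREENSHOT_IMAGE_VIEW : Mode.EDIT_TEXT;
  }

  public static void main(String[] args) {
    System.out.println(chooseMode("hello"));           // EDIT_TEXT
    System.out.println(chooseMode("hi \uD83D\uDE00")); // SCREENSHOT_IMAGE_VIEW
  }
}
```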

(4). View's zoom displacement dispute

We all know there are two ways to change a view's size and position in Android: one is to change the real values in the view's LayoutParams, the other is to set the view's scale and translation. Let's go through the characteristics of both; in the end, our implementation will support both schemes.

  • 1. Change the LayoutParam to change the characteristics of the view:
    • 1. The content in the view is always the size originally defined. For example, if there is text in the view, the font size of the text will not change.
    • 2. If the view is a VG, then it will re-layout.
    • 3. The event distribution can be carried out more conveniently, for example, in my current implementation, accurate event distribution can be carried out in this mode.
  • 2. Use scale and translation to change the characteristics of the view:
    • 1. The content in the view can be directly zoomed in and out. This feature is suitable for most of our demand scenarios.
    • 2. The view will not re-execute measure, layout and draw. The performance seems to be better than the previous method.
    • 3. Event distribution can also be carried out, but it should be a bit tricky. At present, I cannot achieve accurate event distribution in this mode. It may be a problem with my implementation.
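
To make the contrast concrete, here is a plain-Java toy model — not Android code; the class, field, and method names are mine — of how the two approaches affect the content: a LayoutParams change grows the layout box while the text keeps its defined size, whereas a canvas scale leaves the layout box alone and magnifies the rendered content:

```java
public class ResizeModel {
  float layoutWidth = 200f; // width stored in the LayoutParams, px
  float textSizePx = 14f;   // font size defined inside the view, px
  float scale = 1f;         // canvas scale, as set via setScaleX/setScaleY

  // Approach 1: change LayoutParams — triggers measure/layout/draw,
  // content keeps its originally defined size.
  void resizeViaLayoutParams(float factor) { layoutWidth *= factor; }

  // Approach 2: set scale — no re-layout, rendered content is magnified.
  void resizeViaScale(float factor) { scale = factor; }

  // The font size the user actually sees on screen.
  float visualTextSize() { return textSizePx * scale; }

  public static void main(String[] args) {
    ResizeModel a = new ResizeModel();
    a.resizeViaLayoutParams(2f);
    System.out.println(a.layoutWidth + " " + a.visualTextSize()); // 400.0 14.0

    ResizeModel b = new ResizeModel();
    b.resizeViaScale(2f);
    System.out.println(b.layoutWidth + " " + b.visualTextSize()); // 200.0 28.0
  }
}
```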

2. The architecture and implementation of the sticker text control on the Android side

1. Architecture

Let's first talk about the architecture of the text-sticker control. I will explain it based on Figure 1 below together with the code on GitHub. I suggest you clone the code — and of course don't forget to give it a star.

Figure 1: Text sticker architecture

Let's first talk about the architecture of the entire control according to Figure 1.

  • 1. The overall picture first:
    • 1. We concluded in the previous chapter that the drawing container of the whole control should be a VG. The ElementContainerView in the figure is exactly that container. In brief, it has these responsibilities:
      • 1. Handle gesture events, both single-finger and two-finger.
      • 2. Add and remove the views used to draw the various elements.
      • 3. Provide APIs so that the outside can control the views.
      • 4. Provide listeners so that the outside can observe the internal processes.
    • 2. With the drawing container in place, we need to add views to it, and those views need various data while the user operates them. So I use WE to wrap the view to be displayed; it contains:
      • 1. The data needed during user operations, such as scale, rotate, x, y, and so on.
      • 2. Methods that update the view from that data.
      • 3. APIs that let the ECV manipulate the view inside the WE.
    • 3. ECV and WE can be further subclassed into a variety of extended controls.
  • 2. With the big picture covered, we can walk through the flows in the figure:
    • 1. First the horizontal arrow, the external/internal call path: external calls into ECV that add, delete, modify, or query WEs go through here. The operations on this path are:
      • 1. addElement: add an element to the ECV.
      • 2. deleteElement: delete an element from the ECV.
      • 3. update: have the view inside the WE refresh its state from the current data.
      • 4. findElementByPosition: find the topmost WE at the given coordinates.
      • 5. selectElement: select a WE and move it to the top layer.
      • 6. unSelectElement: deselect a WE.
    • 2. Then the vertical arrow, the gesture event flow. Some internal logic sits in the middle (covered later); the event flow ultimately triggers the following behaviors:
      • 1. The full single-finger move flow: once a WE is selected, we can move it. The move is divided into start, in-progress, and end, and each stage calls the corresponding WE method to update its internal data and then refresh the view.
      • 2. The full two-finger rotate-and-zoom flow: once a WE is selected, two fingers can zoom and rotate it. This is likewise divided into start, in-progress, and end, and again the corresponding WE methods update the data and refresh the view.
      • 3. Clicking the selected element again: once a WE is selected, clicking it again hands the event directly to its view, which can trigger whatever internal response it likes. We can even use a VG as the WE's drawing view, give it the click event, and let it continue distributing the event to its children. Note: because the ECV itself must receive move events, only click events can currently be forwarded.
      • 4. Clicking a blank area: when no WE is hit, we can perform operations such as clearing the current selection. This behavior can be overridden by subclasses.
      • 5. onFling: a "throw" gesture, usable for fun behaviors such as letting the WE slide on for a distance after the finger lifts. Also overridable by subclasses.
      • 6. Subclass-extensible events: the events above may not feel like many, so the three methods downSelectTapOtherAction, scrollSelectTapOtherAction, and upSelectTapOtherAction are called first on down, move, and up respectively. Subclasses can override them; returning true means the event was consumed and the ECV triggers nothing else. This lets subclasses extend the gestures, for example pressing a specific spot and zooming with one finger.
      • 7. The figure also shows DECV, a subclass of ECV I wrote that simply adds two gestures:
        • 1. Single-finger move-to-zoom: similar to Douyin, pressing the bottom-right corner of the element lets a drag zoom and rotate it.
        • 2. Delete: similar to Douyin, clicking the top-left corner of the element deletes it directly.
    • 3. One feature of Figure 1 is not drawn, because it cannot really be drawn: almost every ECV behavior in 1 and 2 can be observed externally, and ElementActionListener is the interface responsible for that. The ECV holds a set of EALs, so multiple listeners can be added.
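
The listener mechanism — an EAL set inside the ECV so multiple listeners can be registered — is the ordinary observer pattern. Below is a minimal stand-alone sketch; the interface is heavily simplified and the names are mine, so see the real ElementActionListener in the repo for the full callback list:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class ListenerSketch {
  // Simplified stand-in for ElementActionListener; the real interface has
  // callbacks for add, delete, select, move, scale/rotate, and more.
  interface ElementListener { void onAdd(String elementName); }

  static class Container {
    private final List<ElementListener> listeners = new ArrayList<>();

    void addListener(ElementListener l) { listeners.add(l); }

    // Mirrors ECV's callListener: notify every registered listener.
    void callListener(Consumer<ElementListener> action) { listeners.forEach(action); }

    void addElement(String name) { callListener(l -> l.onAdd(name)); }
  }

  public static void main(String[] args) {
    Container ecv = new Container();
    ecv.addListener(name -> System.out.println("listener1: added " + name));
    ecv.addListener(name -> System.out.println("listener2: added " + name));
    ecv.addElement("text-sticker");
  }
}
```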

2. Technical point realization

I ran into quite a few technical difficulties while developing the control, so this section picks out a few of them to discuss, so that readers won't be too confused when looking at the source code.

(1). Define the data structure and draw the coordinate system

----- Code block 1 ----- com.whensunset.sticker.WsElement

public int mZIndex = -1; // z-axis position in the ECV; 0 is the top layer, -1 means not yet added
  
  protected float mMoveX; // horizontal offset of the view from the center of mElementContainerView, in px
  
  protected float mMoveY; // vertical offset of the view from the center of mElementContainerView, in px
  
  protected float mOriginWidth; // initial width of the view, in px
  
  protected float mOriginHeight; // initial height of the view, in px
  
  protected Rect mEditRect; // the area within which the view may be drawn, usually the bounds of the ECV
  
  protected float mRotate; // clockwise rotation of the view, in the range [-360, 360]
  
  protected float mScale = 1.0f; // scale factor of the view; 1 means unscaled
  
  protected float mAlpha = 1.0f; // transparency of the view; 1 is fully opaque
  
  protected boolean mIsSelected; // whether this element is currently selected
  
  @ElementType
  protected int mElementType; // the type of this element
  
  // the ECV that hosts this element's showing view
  protected ElementContainerView mElementContainerView;
  
  protected View mElementShowingView; // the view that actually displays this element
  
  protected int mRedundantAreaLeftRight = 0; // extra touchable area to the left and right of the view
  
  protected int mRedundantAreaTopBottom = 0; // extra touchable area above and below the view
  
  // whether the showing view itself responds to clicks while the element is selected
  protected boolean mIsResponseSelectedClick = false;
  
  // whether to update the showing view via its real layout params (width/height)
  // instead of via the scale/rotate canvas properties
  protected boolean mIsRealUpdateShowingViewParams = false; 

Data first, features second: the data structure is the core of a framework, and a well-defined data structure saves a lot of unnecessary code. So in this section let's define the data structure and the view-drawing coordinate system according to code block 1.

  • 1. We use the ECV where the WE is located as the drawable area of the view in the WE, and mEditRect in code block 1 is the rectangle represented by this area. So mEditRect is generally [0, 0, ECV.getWidth, ECV.getHeight] , and the unit of mEditRect is px .

  • 2. The origin of the coordinate system we defined is at the center of mEditRect, which is the center of ECV. mMoveX and mMoveY respectively represent the distance of the view from the origin of the coordinate system. Because both of them are 0 by default, the default position is usually at the center of the ECV when the view is added to the ECV. The unit of these two parameters is px .

  • 3. Our coordinate system also has a z-axis; mZIndex is the z coordinate, representing the stacking order of the views. When mZIndex is 0, the view is at the top of the ECV. The default mZIndex is -1, meaning the view has not been added to the ECV. mZIndex is an integer.

  • 4. We define mRotate as positive when the view rotates clockwise, and the range of mRotate is [-360, 360].

  • 5. We define mScale as 1 when the view is not zoomed; when mScale is 2, the view is enlarged 2 times, and so on.

  • 6. mOriginWidth and mOriginHeight are the initial size of the view, in px.

  • 7. mAlpha is the transparency of the view; the default is 1, and the value is at most 1.

  • 8. The remaining parameters need no further explanation; see the comments in the code.
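
Putting points 1, 2, and 6 together, converting from this center-origin coordinate system back to an ordinary top-left layout position could look like the sketch below. This is a plain-Java guess at what getRealX/getRealY compute — the real methods live in WsElement and take different parameters:

```java
public class CoordSketch {
  // Center-origin model: (moveX, moveY) is the offset of the view's center
  // from the center of the ECV; layout coordinates are top-left based.
  static float realX(float moveX, float ecvWidth, float viewWidth, float scale) {
    // the scaled view centered in the ECV, then shifted by moveX
    return (ecvWidth - viewWidth * scale) / 2f + moveX;
  }

  static float realY(float moveY, float ecvHeight, float viewHeight, float scale) {
    return (ecvHeight - viewHeight * scale) / 2f + moveY;
  }

  public static void main(String[] args) {
    // A 200px-wide, unscaled view with moveX = 0 sits centered in a 1080px ECV.
    System.out.println(realX(0f, 1080f, 200f, 1f));   // 440.0
    // Moving 100px right shifts the layout x by the same amount.
    System.out.println(realX(100f, 1080f, 200f, 1f)); // 540.0
  }
}
```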

(2) How is the View in WE updated

From the previous analysis, we know that in the process of ECV processing gestures, various data in WE will be continuously updated. After the data is updated, WE.update will be called to refresh the state of the view. Let's use code block 2 to briefly analyze the two view refresh methods we support:

----- Code block 2 ----- com.whensunset.sticker.WsElement#update

  public void update() {
    if (isRealChangeShowingView()) {
      AbsoluteLayout.LayoutParams showingViewLayoutParams = (AbsoluteLayout.LayoutParams) mElementShowingView.getLayoutParams();
      showingViewLayoutParams.width = (int) (mOriginWidth * mScale);
      showingViewLayoutParams.height = (int) (mOriginHeight * mScale);
      if (!limitElementAreaLeftRight()) {
        mMoveX = (mMoveX < 0 ? -1 * getLeftRightLimitLength() : getLeftRightLimitLength());
      }
      showingViewLayoutParams.x = (int) getRealX(mMoveX, mElementShowingView);
      
      if (!limitElementAreaTopBottom()) {
        mMoveY = (mMoveY < 0 ? -1 * getBottomTopLimitLength() : getBottomTopLimitLength());
      }
      showingViewLayoutParams.y = (int) getRealY(mMoveY, mElementShowingView);
      mElementShowingView.setLayoutParams(showingViewLayoutParams);
    } else {
      mElementShowingView.setScaleX(mScale);
      mElementShowingView.setScaleY(mScale);
      if (!limitElementAreaLeftRight()) {
        mMoveX = (mMoveX < 0 ? -1 * getLeftRightLimitLength() : getLeftRightLimitLength());
      }
      mElementShowingView.setTranslationX(getRealX(mMoveX, mElementShowingView));
      
      if (!limitElementAreaTopBottom()) {
        mMoveY = (mMoveY < 0 ? -1 * getBottomTopLimitLength() : getBottomTopLimitLength());
      }
      mElementShowingView.setTranslationY(getRealY(mMoveY, mElementShowingView));
    }
    mElementShowingView.setRotation(mRotate);
    mElementShowingView.bringToFront();
  } 
  • 1. Updating the view by setting its real parameters: in code block 2 we see a flag distinguishing the two update methods. This method is simple: because our ECV inherits from AbsoluteLayout, we fetch the LayoutParams of mElementShowingView and write the corresponding data into it. Two things to note:
    • 1. This method re-runs measure, layout, and draw on every update.
    • 2. With this method I have successfully implemented event distribution when the showing view is a VG.
  • 2. Updating the view by setting its canvas parameters: the second way updates the view by setting properties of the RenderNode underneath it, which we can loosely think of as scaling, rotating, and translating the canvas. Two things to note:
    • 1. This method does not re-trigger measure, layout, draw, and so on, so its performance should be better than the first.
    • 2. At present this method only handles events correctly when the showing view is a plain View; if it is a VG, the events get confused, and I have no good solution yet.
  • 3. The two update methods share some common points:
    • 1. Both impose limits on the view's mMoveX and mMoveY; if the current data exceeds the limit, the parameter is snapped to the boundary value.
    • 2. Both use setRotation to rotate the view.
    • 3. At the end of the update, bringToFront is called to bring the view to the top of the ECV.
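
The shared clamping in point 3.1 — snap mMoveX/mMoveY back to the boundary when the limit check fails — is equivalent to the small helper below. It is a stand-alone restatement of the ternaries in code block 2, with the limit check folded into one function for illustration:

```java
public class ClampSketch {
  // Mirrors: mMoveX = (mMoveX < 0 ? -1 * limit : limit) when out of bounds.
  static float clampMove(float move, float limit) {
    if (Math.abs(move) <= limit) return move; // within bounds: unchanged
    return move < 0 ? -limit : limit;         // out of bounds: snap to the edge
  }

  public static void main(String[] args) {
    System.out.println(clampMove(100f, 300f));  // 100.0
    System.out.println(clampMove(-500f, 300f)); // -300.0
    System.out.println(clampMove(500f, 300f));  // 300.0
  }
}
```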

(3). How the event is delivered from the ECV to the sub-VG for distribution

First of all, I won't go into Android's event-distribution system itself; I will discuss the concrete implementation scheme along with code block 3.

----- Code block 3 ----- com.whensunset.sticker.ElementContainerView

@Override
  public boolean dispatchTouchEvent(MotionEvent ev) {
    if (mSelectedElement != null && mSelectedElement.isShowingViewResponseSelectedClick()) {
      if (ev.getAction() == MotionEvent.ACTION_DOWN) {
        long time = System.currentTimeMillis();
        mUpDownMotionEvent[0] = copyMotionEvent(ev);
        Log.i(DEBUG_TAG, "time:" + (System.currentTimeMillis() - time));
      } else if (ev.getAction() == MotionEvent.ACTION_UP) {
        mUpDownMotionEvent[1] = copyMotionEvent(ev);
      }
    }
    return super.dispatchTouchEvent(ev);
  }
  
  private static MotionEvent copyMotionEvent(MotionEvent motionEvent) {
    Class<?> c = MotionEvent.class;
    Method motionEventMethod = null;
    try {
      motionEventMethod = c.getMethod("copy");
    } catch (NoSuchMethodException e) {
      e.printStackTrace();
    }
    MotionEvent copyMotionEvent = null;
    try {
      copyMotionEvent = (MotionEvent) motionEventMethod.invoke(motionEvent);
    } catch (IllegalAccessException e) {
      e.printStackTrace();
    } catch (InvocationTargetException e) {
      e.printStackTrace();
    }
    return copyMotionEvent;
  }
  
  @Override
  public boolean onInterceptTouchEvent(MotionEvent event) {
    return true;
  }

/**
   * Handle a click on the currently selected element.
   */
  protected void selectedClick(MotionEvent e) {
    if (mSelectedElement == null) {
      Log.w(DEBUG_TAG, "selectedClick edit text but not select ");
    } else {
      if (mSelectedElement.isShowingViewResponseSelectedClick()) {
        mUpDownMotionEvent[0].setLocation(
            mUpDownMotionEvent[0].getX() - mSelectedElement.mElementShowingView.getLeft(),
            mUpDownMotionEvent[0].getY() - mSelectedElement.mElementShowingView.getTop());
        rotateMotionEvent(mUpDownMotionEvent[0], mSelectedElement);
  
        mUpDownMotionEvent[1].setLocation(
            mUpDownMotionEvent[1].getX() - mSelectedElement.mElementShowingView.getLeft(),
            mUpDownMotionEvent[1].getY() - mSelectedElement.mElementShowingView.getTop());
        rotateMotionEvent(mUpDownMotionEvent[1], mSelectedElement);
        mSelectedElement.mElementShowingView.dispatchTouchEvent(mUpDownMotionEvent[0]);
        mSelectedElement.mElementShowingView.dispatchTouchEvent(mUpDownMotionEvent[1]);
      } else {
        mSelectedElement.selectedClick(e);
      }
      callListener(
          elementActionListener -> elementActionListener
              .onSelectedClick(mSelectedElement));
    }
  } 
  • 1. In code block 3, I have selected several important methods. We will explain the plan around these methods in a while. Before that, we need to understand several premises:
    • 1. Why does the ECV handed over to the sub-VG only support the click event ? The reason is very simple, mainly because Move, LongPress, Fling, etc. gestures are all gestures that ECV must consume, and even ECV also needs to consume the event of the first VG click. Therefore, in order to prevent the ECV from conflicting with the sub-VG, the sub-VG can only receive click events.
    • 2. The sub-VG can only receive click events after the sub-VG is selected. The reason is also very simple. When we design the framework, most of the operations on the WsElement are established after the WsElement is selected, and the same is true for the click event.
    • 3. On the basis of 2, some readers will surely think of a question: if I select a WsElement, ECV must process the movement gestures with down gestures, and the click events of sub-VGs also require down gestures. Isn't it still a conflict? I will solve this problem when I explain the code in the next paragraph.
  • 2. Without further ado, let's analyze code block 3:
    • 1. First is onInterceptTouchEvent. This method makes ECV intercept all gestures passing through it, so ECV has the highest priority in gesture handling; only the gestures ECV does not need are handed to the child VG, namely the click event after a WsElement is selected, as described earlier.
    • 2. Next is dispatchTouchEvent. This method is called when ECV's parent view hands an event to ECV, and it is also the entry point for ECV to dispatch events internally. We can see that the down and up MotionEvents are cloned and stored for later use. One thing to note: although the copy method of MotionEvent is declared public, it was hidden from the public API at some version, so here we can only clone the MotionEvent via reflection. Since we only clone the down and up MotionEvents of a down-to-up event sequence, there is basically no performance impact.
    • 3. Finally, the selectedClick method. As mentioned earlier, after a WsElement is selected, both ECV's move gesture and the child VG's click event need the down event. Our solution: the down event is still consumed by ECV, and in the up event we manually call the child VG's dispatchTouchEvent twice, passing in the previously stored down and up MotionEvents in turn. If the VG is not rotated, event dispatch then works as normal; if the VG is rotated, the x and y coordinates in the MotionEvent also need to be rotated by the corresponding angle. And, as noted earlier, event dispatch currently only supports views updated via LayoutParams.
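The coordinate rotation needed for the rotated-VG case above boils down to plain math: a touch point must be rotated by the opposite angle around the view's center before it can be dispatched into the rotated child. The helper below is my own illustration of that idea, not the library's actual rotateMotionEvent code.

```java
import java.util.Locale;

// Illustrative sketch (not the library's code): map a touch point from the
// parent's coordinate space into the space of a child rotated by `degrees`
// around its center (cx, cy). Rotating the child by +degrees is equivalent
// to rotating the touch point by -degrees around the same center.
public class TouchRotation {
  public static double[] rotatePoint(double x, double y,
                                     double cx, double cy, double degrees) {
    double rad = Math.toRadians(-degrees);
    double dx = x - cx, dy = y - cy;
    // Standard 2D rotation around (cx, cy)
    double rx = cx + dx * Math.cos(rad) - dy * Math.sin(rad);
    double ry = cy + dx * Math.sin(rad) + dy * Math.cos(rad);
    return new double[]{rx, ry};
  }

  public static void main(String[] args) {
    // A touch 10px to the right of a center at (100, 100), child rotated 90°.
    double[] p = rotatePoint(110, 100, 100, 100, 90);
    System.out.println(String.format(Locale.US, "%.1f,%.1f", p[0], p[1]));
  }
}
```

In the real control the same transform would be applied to the x/y of the cloned down and up MotionEvents before calling dispatchTouchEvent on the child.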

3. A brief analysis of the source code process

In this section I will walk through the flow of the whole source code with a simple demo, so that readers get a basic picture of how the control operates. Since this section focuses on the source code, readers should clone the repository and follow along with the article.

(1). Add elements

  • 1. I won't repeat the simple initialization steps; let's start from the addTestElement button in MainActivity. Clicking it first creates a TestElement, the element I use for testing, whose internal code is very simple. Then unSelectElement and addSelectAndUpdateElement are called in turn. unSelectElement deselects the currently selected element and will be analyzed later; let's look at addSelectAndUpdateElement first.
  • 2. addSelectAndUpdateElement is a composite method that calls addElement, selectElement, and update in turn, that is: add the element, select it, and update it. Let's analyze them one by one:
    • 1. addElement : This method mainly does the following things:
      • 1. Perform data check. If the added WE is empty or the WE is already in the ECV, then the addition fails.
      • 2. In ECV, I maintain a LinkedList of WE, all WEs are stored in it, each time WE is added, WE will be added to the top of the list, and the mZIndex of other WEs will be updated accordingly.
      • 3. Call the WE.add method, which initializes mElementShowingView and adds it to the ECV. The more specific initialization process here will be discussed in detail in a later point.
      • 4. Call the corresponding listener methods, and call the auto-deselect method (whether ECV auto-deselects can be decided from outside).
    • 2. selectElement : After WE is added, we directly select it here. The code mainly does the following things:
      • 1. Check the data. If the WE that needs to be selected has not been added to the ECV, the selection will fail.
      • 2. Remove the WE that needs to be selected from the list and add it to the top of the list, and then update the mZIndex of other WEs by the way.
      • 3. Call the select method of WE, which mainly updates the data of WE to be selected.
      • 4. Call the corresponding method of the listener.
    • 3. Update : After everything is done before, you need to adjust the WE to its proper state, that is, perform one of the two view update modes we mentioned in the previous section, so I won't go into details here.
  • 3. WE.add: if you look carefully at the WE source code, you will find that mElementShowingView is actually initialized and added to ECV not when the WE is created, but in ECV.addElement as mentioned in 2. This method mainly does the following:
    • 1. If mElementShowingView has not been initialized, initView is called to create the view. initView is an abstract method that subclasses must implement. Taking TestElement as an example, we can see that an ImageView is created in its initView.
    • 2. After obtaining the view from initView, it is added to ECV using LayoutParams. From this we can tell that when mElementShowingView in WE is initialized, both left and top are 0, meaning it sits in the upper-left corner of ECV, and its width and height are the mOriginWidth and mOriginHeight set when the WE was created.
    • 3. If mElementShowingView has been initialized, then it will be updated here.
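The list bookkeeping in addElement and selectElement above can be modeled in a few lines. The sketch below is a simplification of the idea described in the article (most recently added or selected element on top of a LinkedList, mZIndex mirroring list position), not the library's actual code.

```java
import java.util.LinkedList;

// Simplified model of ECV's element list: the newest/selected element sits
// at the head of a LinkedList, and each element's zIndex is its position.
public class ElementStack {
  static class Element {
    final String name;
    int zIndex;
    Element(String name) { this.name = name; }
  }

  private final LinkedList<Element> elements = new LinkedList<>();

  public void add(Element e) {
    if (e == null || elements.contains(e)) return; // data check: reject null/duplicates
    elements.addFirst(e);                          // newly added element goes on top
    refreshZIndex();
  }

  public void select(Element e) {
    if (e == null || !elements.contains(e)) return; // must already be added
    elements.remove(e);
    elements.addFirst(e);                           // move the selected element to the top
    refreshZIndex();
  }

  private void refreshZIndex() {
    for (int i = 0; i < elements.size(); i++) elements.get(i).zIndex = i;
  }

  public Element top() { return elements.peekFirst(); }

  public static void main(String[] args) {
    ElementStack s = new ElementStack();
    Element a = new Element("a"), b = new Element("b");
    s.add(a);
    s.add(b);      // b is now on top
    s.select(a);   // selecting a moves it back to the top
    System.out.println(s.top().name + " " + a.zIndex + " " + b.zIndex);
  }
}
```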

(2). Element single-finger gestures

Element gestures do not require external calls the way adding elements does; they are triggered by event distribution, so we can start with the ECV.onTouchEvent method.

  • 1. When looking at ECV.onTouchEvent, skip all the preceding code and look directly at the last line of the method. A GestureDetector is used here; many readers will have used one, so I won't repeat its basic usage. Let's go straight to the addDetector method, where it is defined.

  • 2. For the processing of element single-finger gestures, mainly look at three touch events: down, move, and up. So we directly look at the onDown, onScroll, and onSingleTapUp callbacks of GestureDetector .

    • 1. onDown skips the two-finger gesture and goes straight into the singleFingerDown method. The logic inside is as follows:
      • 1. Find the top WE under the current position according to the down position through findElementByPosition .
      • 2. If there is a currently selected WE and it is the same as the WE being touched, first call downSelectTapOtherAction. This method can be overridden by subclasses and returns false by default; in other words, the subclass gets to handle the event first, and if it does, we return. If the subclass does not handle it, mark mMode as SELECTED_CLICK_OR_MOVE, indicating that the final gesture may turn out to be either a click on the element or a move of the element; which one can only be decided at move or up.
      • 3. If there is a currently selected WE but it is not the same as the touched WE, there are two cases. If the touched WE does not exist, mMode is marked as SINGLE_TAP_BLANK_SCREEN, meaning a blank area of ECV was tapped. If the touched WE exists, it means a different WE is being reselected.
      • 4. If no WE is currently selected, there are again two cases: if the touched WE does not exist, it is a tap on the blank area as before; otherwise, a WE is selected.
    • 2. onScroll first offers the move event to scrollSelectTapOtherAction. This method can also be overridden by subclasses and returns false by default; if the subclass handles the event, we return directly. Otherwise, the move gesture is triggered when mMode is SELECTED_CLICK_OR_MOVE (an already-selected WE starts moving), SELECT (a just-selected WE starts moving), or MOVE (the WE is already moving). The specific logic is in singleFingerMove:
      • 1. First, depending on mMode, call singleFingerMoveStart or singleFingerMoveProcess. singleFingerMoveStart calls the corresponding methods of the listener and the WE and contains essentially no other logic. singleFingerMoveProcess also calls the corresponding methods of the listener and the WE, but in the WE's method the mMoveX and mMoveY data are updated.
      • 2. Call update to refresh the view in WE, and set mMode to MOVE, meaning the element is moving.
    • 3. onSingleTapUp also filters out the two-finger gesture first, then calls the singleFingerUp method:
      • 1. If mMode is SELECTED_CLICK_OR_MOVE, only here can we confirm that the user's action is a click on an already-selected element. We analyzed the event-dispatch mechanism involved earlier, so I won't repeat it.
      • 2. If mMode is SINGLE_TAP_BLANK_SCREEN, a blank area of ECV was tapped. The onClickBlank called here can also be overridden by subclasses to implement their own logic.
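The mMode transitions described above can be condensed into a tiny state machine. The class below is a hypothetical simplification for illustration; the state names follow the article, but the real ECV has more states and checks.

```java
// Illustrative sketch of the single-finger mMode transitions, not the
// library's actual code.
public class GestureModes {
  enum Mode { NONE, SELECTED_CLICK_OR_MOVE, SELECT, SINGLE_TAP_BLANK_SCREEN, MOVE }

  Mode mode = Mode.NONE;

  // down: decide based on whether a WE was hit and whether it was already selected
  void onDown(boolean hitElement, boolean hitIsSelected) {
    if (hitElement && hitIsSelected) {
      mode = Mode.SELECTED_CLICK_OR_MOVE; // click or move; decided at move/up
    } else if (hitElement) {
      mode = Mode.SELECT;                 // a WE gets (re)selected
    } else {
      mode = Mode.SINGLE_TAP_BLANK_SCREEN; // blank area tapped
    }
  }

  // move: only SELECTED_CLICK_OR_MOVE / SELECT / MOVE may enter moving
  void onScroll() {
    if (mode == Mode.SELECTED_CLICK_OR_MOVE || mode == Mode.SELECT || mode == Mode.MOVE) {
      mode = Mode.MOVE;
    }
  }

  // up: only now can a click-on-selected-element be confirmed
  boolean onUpIsSelectedClick() {
    return mode == Mode.SELECTED_CLICK_OR_MOVE;
  }

  public static void main(String[] args) {
    GestureModes g = new GestureModes();
    g.onDown(true, true); // down on the already-selected element
    g.onScroll();         // a move arrives before up, so it is not a click
    System.out.println(g.mode + " " + g.onUpIsSelectedClick());
  }
}
```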

(3). Element two-finger gesture and delete

I will leave the rest for readers to explore in the source code themselves. I really can't write any more, and I want to save some energy for imitating the TikTok sticker control in the last chapter. See you there.

3. Imitate a TikTok sticker control

In this last chapter, I will imitate Douyin's static stickers based on our controls. Of course, not every detail will be reproduced, but in some places our imitation will actually do better than Douyin.

The good news is that I have packaged the core code and uploaded it from GitHub to JCenter. If readers want to use this package, just add implementation 'com.whensunset:sticker:0.2' to the build.gradle file like a normal dependency. This library will be maintained long-term, so feel free to raise issues. Let's first look at a few demo GIFs:

Figure 2: One finger moves, two fingers rotates to zoom the watermark.gif
Figure 3: Single finger rotate zoom, click to delete watermark.gif
Figure 4: Location auxiliary line watermark.gif
Figure 5: Trash can watermark.gif

1. Features

Let's talk about the features contained in our library in this section.

  • 1. One-finger movement, two-finger rotation zoom, two-finger movement: these functions are directly available in ECV and WE, and Douyin also has them.
  • 2. Decorative frame when selected, single-finger rotation zoom, click to delete: these functions are added on the DECV and DecorationElement layer, and Douyin also has them.
  • 3. Position auxiliary lines: ins does this very well, while Douyin does it quite poorly, so I imitated ins; RLECV supports this function.
  • 4. Trash can: both ins and Douyin have this function. The ins experience is better, but my ability to imitate ins is limited, so I imitated Douyin instead; TECV supports this function.
  • 5. Animation effects: both ins and Douyin have these. AnimationElement is the concrete implementation class for animation. When implementing it, I also added a sliding effect after onFling to DECV, which is quite fun, so the experience of our imitation should be better.
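To make the auxiliary-line idea in point 3 concrete: a typical approach is to snap the element onto a guide line once its center comes within a small threshold of that line, and show the guide while snapped. The function below is my own illustration of that idea; the actual RuleLineElementContainerView logic in the library may differ.

```java
// Illustrative sketch (not the library's code): snap an element's center x
// onto the container's vertical center line when it is close enough.
public class RuleLineSnap {
  public static float snapToCenter(float elementCenterX, float containerWidth,
                                   float threshold) {
    float centerLine = containerWidth / 2f;
    // Within the threshold: snap onto the line (and a guide could be drawn).
    // Outside it: leave the element where the gesture put it.
    return Math.abs(elementCenterX - centerLine) <= threshold
        ? centerLine
        : elementCenterX;
  }

  public static void main(String[] args) {
    System.out.println(snapToCenter(536f, 1080f, 10f)); // 4px away: snaps to 540
    System.out.println(snapToCenter(500f, 1080f, 10f)); // 40px away: unchanged
  }
}
```

The same check, applied to the horizontal center line and the element's edges, gives the full set of guides seen in Figure 4.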

2. Imitation

In fact, most of the core code has already been integrated into the library, so we only need a little code to imitate most of the functions of the TikTok stickers; in some places we even do better than TikTok.

Our test code is in the test module of the project on GitHub. You can follow the analysis below alongside the code:

  • 1. As we said before, our library contains several ECVs with different functions. From the architecture diagram and the analysis in the previous section, we know that TECV is the lowest class in the inheritance hierarchy and therefore contains all the functions listed in the previous section. So we can use TECV as the element container view in activity_main.
  • 2. With the layout defined, let's look at MainActivity. There is a very important line here: Sticker.initialize(this); this method must be called before using the framework, as it initializes some things. It is recommended to call it when the App is initialized.
  • 3. Adding a TestElement was already covered in the previous chapter, so I won't go into it again. Let's look at addStaticElement: clicking it triggers the addition of a StaticStickerElement, which is the static sticker element.
  • 4. Entering StaticStickerElement to view the code, you will find it is very simple: since the view used by StaticStickerElement is a SimpleDraweeView, the main code inside just constructs an ImageRequest; everything else is already handled. Although the code is simple, StaticStickerElement can display not only local images but also network images. Doesn't that make the library feel very easy to use, with very good results?
  • 5. At this point this blog post is approaching ten thousand characters, so the remaining functions in the library are left for readers to explore. When I have time, I will post a usage document for the library on GitHub. Stars, forks, and issues are all welcome.

4. The end

This is another long article, and I hope everyone likes it. I have been busy recently, so blog updates will not be as regular as before; I hope you can forgive me. But however busy I am, my articles will always be carefully selected technical material; I will not publish filler posts that create anxiety just to boost exposure. The road is long; let's move forward together.
