    Native Touch API Documentation

    Welcome! Please see the How To Use guide to get a general idea of how Native Touch should be used.

    Touch struct reference

    NativeTouchData

    This is the struct for the normal mode callback. Below is a list of the properties available.

    Name             | iOS | Android
    -----------------|-----|--------
    X                | ✔   | ✔
    Y                | ✔   | ✔
    IntegerX         | ✔   | ✔
    IntegerY         | ✔   | ✔
    Phase            | ✔   | ✔
    Timestamp        | ✔   | ✔
    PreviousX        | ✔   |
    PreviousY        | ✔   |
    PreviousIntegerX | ✔   |
    PreviousIntegerY | ✔   |
    PointerId        |     | ✔

    NativeTouchDataFull

    This is the struct for the full mode callback. In addition to everything in NativeTouchData above, these properties are available.

    Name                 | iOS | Android
    ---------------------|-----|--------
    TapCount             | ✔   |
    Type                 | ✔   |
    Force                | ✔   |
    MaximumPossibleForce | ✔   |
    MajorRadius          | ✔   |
    MajorRadiusTolerance | ✔   |
    AltitudeAngle        | ✔   |
    AzimuthAngle         | ✔   |
    TouchMinor           |     | ✔
    TouchMajor           |     | ✔
    Pressure             |     | ✔
    Size                 |     | ✔
    Orientation          |     | ✔

    ✴️ All platforms

    public float X

    [All platforms] 0 is the leftmost (same as Unity)

    You get the point in the range of Screen.width/Screen.height as long as you are not using resolution scaling. If you use (dynamic) resolution scaling, the Screen API is scaled down but the native point stays in the range as if it hadn't been scaled. Use NativeTouch.RealScreenResolution() to get the original, unscaled screen bounds.

    [iOS] Natively reported in UIKit points, then multiplied by the native scale factor so that it matches Unity's coordinate system.

    [Android] The value may contain a fraction. The fraction is for sub-pixel precision only; for a movement to register, the whole pixel must change. There is no movement at purely sub-pixel level (e.g. a series of movements from 255.04 to 255.94 can never happen). See the official Google documentation.

    public int IntegerX

    [All platforms] 0 is the leftmost (same as Unity)

    [iOS] A simple convenience cast, which likely does not change the value from the float version at all.

    [Android] Converted to int by rounding DOWN, discarding the sub-pixel precision.

    public float Y

    [All platforms] 0 is the top (DIFFERENT from Unity)

    You get the point in the range of Screen.width/Screen.height as long as you are not using resolution scaling. If you use (dynamic) resolution scaling, the Screen API is scaled down but the native point stays in the range as if it hadn't been scaled. Use NativeTouch.RealScreenResolution() to get the original, unscaled screen bounds.

    [iOS] Natively reported in UIKit points, then multiplied by the native scale factor so that it matches Unity's coordinate system.

    [Android] The value may contain a fraction. The fraction is for sub-pixel precision only; for a movement to register, the whole pixel must change. There is no movement at purely sub-pixel level (e.g. a series of movements from 255.04 to 255.94 can never happen). See the official Google documentation.
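
    Because Y is top-based while Unity's screen space is bottom-based, flipping is usually the first thing you do. Below is a minimal sketch of the conversion; the E7.Native namespace is an assumption, and RealScreenResolution() is used instead of Screen.height so the flip stays correct under resolution scaling.

    ```csharp
    using UnityEngine;
    using E7.Native; // assumed namespace of the Native Touch plugin

    public static class TouchConvert
    {
        // Convert a Native Touch position to Unity's bottom-left-based screen space.
        public static Vector2 ToUnityScreenSpace(NativeTouchData ntd)
        {
            Vector2Int realScreen = NativeTouch.RealScreenResolution();
            // Native Y grows downward from the top; Unity's grows up from the bottom.
            return new Vector2(ntd.X, realScreen.y - ntd.Y);
        }
    }
    ```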

    public int IntegerY

    [All platforms] 0 is the top (DIFFERENT from Unity)

    [iOS] A simple convenience cast, which likely does not change the value from the float version at all.

    [Android] Converted to int by rounding DOWN, discarding the sub-pixel precision.

    public TouchPhase Phase

    Cast from int to TouchPhase in the getter of this property.

    [iOS] Unity's enum values match the native iOS side, so the phase comes through unchanged. TouchPhase.Stationary signifies that this touch stayed still while another touch moved. For an explanation please see Callback Details

    [Android] You can never get TouchPhase.Stationary because the Android API does not have one; at the same time, TouchPhase.Moved may signify either a stationary touch or a real movement. For an explanation please see Callback Details
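
    For illustration, handling by phase might look like the sketch below (the E7.Native namespace is an assumption). Note the Android caveat above: the Moved case may actually be a stationary touch.

    ```csharp
    using UnityEngine;
    using E7.Native; // assumed namespace

    public static class PhaseHandler
    {
        public static void Handle(NativeTouchData ntd)
        {
            switch (ntd.Phase)
            {
                case TouchPhase.Began:
                    Debug.Log($"Down at {ntd.X},{ntd.Y}");
                    break;
                case TouchPhase.Moved:      // on Android this may be a stationary touch
                case TouchPhase.Stationary: // iOS only; never reported on Android
                    Debug.Log($"Move/hold at {ntd.X},{ntd.Y}");
                    break;
                case TouchPhase.Ended:
                case TouchPhase.Canceled:
                    Debug.Log("Up");
                    break;
            }
        }
    }
    ```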

    public double Timestamp

    Here's one thing Native Touch can give you that Unity can't. Unity has no timestamp indicating when a touch really happened; you are forced to use the in-frame time even though the touch likely occurred earlier, outside the frame. That unnecessarily punishes the late-pressing player in a game that needs timing (or, vice versa, helps players who like to press early). This number is the time since the device started up and cannot be compared directly with Unity's Time.realtimeSinceStartup. To convert it to Unity time, use the NativeTouch.GetNativeTouchTime() static method to ask for the current native time, and at that moment remember Time.realtimeSinceStartup on the Unity side. You can then relate the two timelines.

    [iOS] Based on ProcessInfo.systemUptime in the iOS API. The unit is SECONDS with sub-millisecond precision, already a double.

    [Android] Based on SystemClock.uptimeMillis() in the Android API. The unit is MILLISECONDS, converted to double. (Originally a long.)

    Android can report multiple touches per MotionEvent, and even touches that stayed still are reported inside an ACTION_MOVE. The ones staying still get the same new timestamp, copied from the moved/up/down one (the "main pointer" that caused the action).
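
    Here is a sketch of the anchoring procedure just described, assuming the E7.Native namespace; note that you must handle the seconds-vs-milliseconds difference between platforms yourself.

    ```csharp
    using UnityEngine;
    using E7.Native; // assumed namespace

    public static class TouchClock
    {
        static double nativeAnchor; // native time captured once
        static float unityAnchor;   // Unity time captured at the same moment

        // Call once, e.g. right after NativeTouch.Start().
        public static void Calibrate()
        {
            nativeAnchor = NativeTouch.GetNativeTouchTime();
            unityAnchor = Time.realtimeSinceStartup;
        }

        // Convert a touch timestamp to Unity's realtimeSinceStartup timeline.
        public static float ToUnityTime(double touchTimestamp)
        {
    #if UNITY_ANDROID
            // Android timestamps are in milliseconds; convert to seconds first.
            double elapsed = (touchTimestamp - nativeAnchor) / 1000.0;
    #else
            double elapsed = touchTimestamp - nativeAnchor; // iOS: already seconds
    #endif
            return unityAnchor + (float)elapsed;
        }
    }
    ```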

    🍎 iOS only

    public float PreviousX

    [iOS] 0 is the leftmost (same as Unity)

    iOS does not have a pointer ID, so you must use the previous position to relate touch movements and see which touch from the previous frame is the same as the current one.

    You get the point in the range of Screen.width/Screen.height as long as you are not using resolution scaling. If you use (dynamic) resolution scaling, the Screen API is scaled down but the native point stays in the range as if it hadn't been scaled. Use NativeTouch.RealScreenResolution() to get the original, unscaled screen bounds.

    Natively reported in UIKit points, then multiplied by the native scale factor so that it matches Unity's coordinate system.

    public int PreviousIntegerX

    [iOS] A simple convenience cast, which likely does not change the value from the float version at all.

    public float PreviousY

    [iOS] 0 is the top (DIFFERENT from Unity)

    iOS does not have a pointer ID, so you must use the previous position to relate touch movements and see which touch from the previous frame is the same as the current one.

    You get the point in the range of Screen.width/Screen.height as long as you are not using resolution scaling. If you use (dynamic) resolution scaling, the Screen API is scaled down but the native point stays in the range as if it hadn't been scaled. Use NativeTouch.RealScreenResolution() to get the original, unscaled screen bounds.

    Natively reported in UIKit points, then multiplied by the native scale factor so that it matches Unity's coordinate system.
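
    A sketch of relating touches by previous position follows; the tracking dictionary, the IDs it assigns, and the distance tolerance are my own bookkeeping, not part of the API, and E7.Native is an assumed namespace.

    ```csharp
    using System.Collections.Generic;
    using UnityEngine;
    using E7.Native; // assumed namespace

    public static class IosTouchTracker
    {
        // Last known position per tracked touch, keyed by an ID we assign ourselves.
        static readonly Dictionary<int, Vector2> lastPositions = new Dictionary<int, Vector2>();

        public static int? MatchByPreviousPosition(NativeTouchData ntd)
        {
            var previous = new Vector2(ntd.PreviousX, ntd.PreviousY);
            int? matchedId = null;
            foreach (var kvp in lastPositions)
            {
                // The tolerance is arbitrary; in theory the positions match exactly.
                if (Vector2.Distance(kvp.Value, previous) < 0.5f)
                {
                    matchedId = kvp.Key;
                    break;
                }
            }
            if (matchedId.HasValue)
            {
                // Advance this touch to its new position.
                lastPositions[matchedId.Value] = new Vector2(ntd.X, ntd.Y);
            }
            return matchedId; // null = a new touch; assign it a fresh ID elsewhere
        }
    }
    ```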

    public int PreviousIntegerY

    [iOS] A simple convenience cast, which likely does not change the value from the float version at all.

    public int TapCount

    [iOS] Read the official documentation

    public int Type

    [iOS] Read the official documentation

    public float Force

    [iOS] Read the official documentation

    public float MaximumPossibleForce

    [iOS] Read the official documentation

    public float MajorRadius

    [iOS] Read the official documentation

    public float MajorRadiusTolerance

    [iOS] Read the official documentation

    public float AltitudeAngle

    [iOS] Read the official documentation

    public float AzimuthAngle

    [iOS] Read the official documentation

    📗 Android only

    public int PointerId

    [Android] Android does not provide a previous position for each touch, so you must use the pointer ID to relate touch movements.
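
    On Android the relation is much simpler; a sketch keyed by PointerId follows (the dictionary is my own bookkeeping, and E7.Native an assumed namespace).

    ```csharp
    using System.Collections.Generic;
    using UnityEngine;
    using E7.Native; // assumed namespace

    public static class AndroidTouchTracker
    {
        // Current position of each active finger, keyed by the native pointer ID.
        static readonly Dictionary<int, Vector2> activeTouches = new Dictionary<int, Vector2>();

        public static void Track(NativeTouchData ntd)
        {
            if (ntd.Phase == TouchPhase.Ended || ntd.Phase == TouchPhase.Canceled)
            {
                activeTouches.Remove(ntd.PointerId); // finger left the screen
            }
            else
            {
                activeTouches[ntd.PointerId] = new Vector2(ntd.X, ntd.Y);
            }
        }
    }
    ```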

    public float Orientation

    [Android] Read the official documentation

    public float Pressure

    [Android] Read the official documentation

    public float Size

    [Android] Read the official documentation

    public float TouchMajor

    [Android] Read the official documentation

    public float TouchMinor

    [Android] Read the official documentation

    NativeTouch

    This class contains various static methods as an entry point to use Native Touch.

    public static void Start(StartOption startOption = default(StartOption))

    Starts Native Touch. After starting, it will call the static callback method(s) you have registered, with timing explained on the Callback Details page.

    [Editor] This method is a stub that does nothing.
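
    A minimal start-up sketch, assuming the E7.Native namespace and that the minimal-mode delegate receives a single NativeTouchData (see StartOption and RegisterCallback below):

    ```csharp
    using UnityEngine;
    using E7.Native; // assumed namespace

    public class NativeTouchStarter : MonoBehaviour
    {
        void Start()
        {
            // A callback matching the chosen mode must be registered
            // before Start() (unless noCallback is used; see StartOption).
            NativeTouch.RegisterCallback(OnNativeTouch);
            NativeTouch.Start();
        }

        void OnDestroy() => NativeTouch.Stop();

        // Must be static so the native side can call back into C#.
        static void OnNativeTouch(NativeTouchData ntd)
        {
            Debug.Log($"{ntd.Phase} at {ntd.X},{ntd.Y} (t = {ntd.Timestamp})");
        }
    }
    ```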

    public class StartOption

    An input parameter for the NativeTouch.Start() static method.

    public bool fullMode = false;

    If FALSE, the returned touch data only contains x, y, previous x, previous y, phase, and timestamp. It might be faster since there are far fewer parameters, and the static callback is easier to define. I believe most of us will not use full mode.

    If TRUE, it returns various other touch data. This mode is in BETA and not extensively tested yet, since that would require various devices that can sense pressure, tilt, angle, etc., which I don't currently have.

    public bool disableUnityTouch = false;

    Also disables sending touches down the normal Unity input path. Consequently, this disables all uGUI and Event System functionality.

    I am not sure if there is a meaningful performance gain from this, since with it enabled I can no longer benchmark Unity's touch for comparison.

    But it certainly frees Unity from processing touches at all.

    If you decide to turn this on, be sure to have a way of calling NativeTouch.Stop() that does not rely on Unity's touch, or you might be stuck with Native Touch forever.

    public bool noCallback = false;

    If true, Native Touch will let you Start(StartOption) regardless of whether you registered any callbacks beforehand.

    On the native side, it will not try to invoke any callbacks.

    This is for use with the ring buffer iteration API. If you want to exclusively iterate through the ring buffer with NativeTouch.touches and do not want the callback, it is good to disable the callback completely with this. (On Android with IL2CPP, unlike Mono, the callback is very expensive. I am not sure whether that is a bug.)

    You can still use both the callback and the ring buffer iteration API at the same time. It is just that, without this option, Native Touch will not let you Start(StartOption) when no callbacks are registered.

    public int ringBufferSize = 150;

    This size is sent to the native side, where the ring buffer is allocated. The memory is only deallocated after a successful Stop().
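
    For example, a ring-buffer-only configuration might look like this sketch (E7.Native assumed):

    ```csharp
    using E7.Native; // assumed namespace

    public static class RingBufferOnlyStart
    {
        public static void Begin()
        {
            NativeTouch.Start(new StartOption
            {
                fullMode = false,     // minimal NativeTouchData is enough
                noCallback = true,    // we will poll NativeTouch.touches instead
                ringBufferSize = 300, // extra room for bursty multi-touch input
            });
        }
    }
    ```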

    public static void Stop()

    Stops calling the registered static callbacks on touch.

    [Android] It removes the touch listener from the view.

    [iOS] The touch recognizer is not actually removed, since there is a strange bug that replays all the touches on removal. Instead, I just temporarily disable it and return to only the normal Unity path.

    [Editor] This method is a stub that does nothing.

    public static void RegisterCallback(NativeTouchInterface.FullDelegate fullDelegate)

    public static void RegisterCallback(NativeTouchInterface.MinimalDelegate minimalDelegate)

    Register as many callbacks as you want before calling NativeTouch.Start. Only callbacks whose mode matches the one in the StartOption will be called.

    public static void RegisterCallback( <4x full delegates> )

    public static void RegisterCallback( <4x minimal delegates> )

    Register as many callbacks as you want before calling NativeTouch.Start. Only callbacks whose mode matches the one in the StartOption will be called.

    With this overload you can register a separate static callback for each callback type on the native side.

    [iOS] This closely mirrors the native side's 4-callback approach. Each callback may receive multiple touches of differing phases, but at least one of them will have the phase matching the callback type. E.g. with 2 fingers held down and one more finger going down, you will get the "Began" callback invoked 3 times, with phases Began, Stationary, Stationary.

    Do not misunderstand that Began will appear exclusively in the Began callback, Moved exclusively in the Moved callback, etc. This is how things work on the native side too.

    [Android] On the native side there is only one callback. However, each MotionEvent contains one "main action". Based on this main action, the appropriate callback type is invoked to mirror the iOS way.
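
    A sketch of registering four separate minimal callbacks; the parameter order and delegate signatures are assumptions (check the actual overload), and E7.Native is an assumed namespace.

    ```csharp
    using UnityEngine;
    using E7.Native; // assumed namespace

    public static class PerPhaseCallbacks
    {
        public static void Register()
        {
            // Assumed overload: one minimal delegate per native callback type.
            NativeTouch.RegisterCallback(OnDown, OnMove, OnUp, OnCancel);
        }

        // Remember: each callback can still receive touches of other phases,
        // e.g. Stationary touches arriving inside the "Began" callback on iOS.
        static void OnDown(NativeTouchData ntd)   => Debug.Log($"Down {ntd.X},{ntd.Y}");
        static void OnMove(NativeTouchData ntd)   => Debug.Log($"Move {ntd.X},{ntd.Y}");
        static void OnUp(NativeTouchData ntd)     => Debug.Log($"Up {ntd.X},{ntd.Y}");
        static void OnCancel(NativeTouchData ntd) => Debug.Log($"Cancel {ntd.X},{ntd.Y}");
    }
    ```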

    public static void ClearCallbacks()

    Clears all the registered callbacks. You cannot clear callbacks while still in the Start() state; you have to Stop() first.

    public static double GetNativeTouchTime()

    Use this to ask for the current time on the same timeline as the timestamps that come with your touches. This timeline is not the same across devices.

    With this and an anchor Time.realtimeSinceStartup remembered at the same moment, you should be able to convert those timestamps into meaningful time for your game.

    [iOS] Based on ProcessInfo.systemUptime in the iOS API. The unit is SECONDS. See documentation

    The API returns a double and it is returned as-is.

    [Android] Based on SystemClock.uptimeMillis() in the Android API. The unit is MILLISECONDS.

    The API actually returns a long, but it is converted to double to be in line with iOS.

    public static Vector2Int RealScreenResolution()

    Native Touch's coordinates are unscaled. If your game uses resolution scaling (or dynamic resolution scaling), then Screen.width/height/resolution/etc. are scaled down too. If you use those to calculate something against Native Touch's returned coordinates, the result will be wrong. (Unity's Input.___ API is scaled down as well.)

    Since the native side has no idea how much Unity is scaling things down, the approach is to use this method to get the resolution of the Screen API as if it weren't scaled. It has the same bounds as the coordinates returned from Native Touch. You can then convert the touch coordinates into the scaled coordinate space.

    [iOS] See this table; the Native Resolution column is Unity's coordinate space when resolution scaling is disabled. This method returns the values of that column by multiplying the UIKit size by the native scale factor. Points from Native Touch are processed in the same fashion.
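
    A sketch of mapping an unscaled native point into the scaled Screen coordinate space (E7.Native assumed):

    ```csharp
    using UnityEngine;
    using E7.Native; // assumed namespace

    public static class ScaledCoordinate
    {
        // Map an unscaled native point into the (possibly scaled-down)
        // Screen.width/Screen.height space that the rest of Unity uses.
        public static Vector2 ToScaledScreen(float nativeX, float nativeY)
        {
            Vector2Int real = NativeTouch.RealScreenResolution();
            return new Vector2(
                nativeX * Screen.width / real.x,
                nativeY * Screen.height / real.y);
        }
    }
    ```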

    public static NativeTouchRingBuffer touches

    An entry point for the ring buffer iteration API. The use is similar to Unity's Input.touches; however, you need to "use up" the touches by iterating over them manually.

    For example, with Input.touches multiple MonoBehaviour scripts can check it in the same frame and see the same thing, but its content suddenly changes to a new set in the next frame. It is possible to miss a touch, but impossible to "use up" a touch within the frame.

    On the other hand, NativeTouch.touches continuously accumulates new touch events from the native side. When you iterate over them, you are "using up" the touches and the pointer moves forward; you cannot iterate over them again. So decide on a central place in your main-thread code where you want to handle touches.

    If you don't iterate through them for some reason, they remain there into the next frame, continuously accumulating new touches. (If you let them accumulate past a certain limit, specifiable in StartOption, the earliest ones start being discarded.)

    Also, the data inside is still the same as the callback-style NativeTouchData. For example, with Input.touches you can check for a held-down touch and see it continuously every frame as long as the finger is down. With NativeTouch.touches, you will see just one Down event when you dequeue it. That is how touch natively works.

    So another way to think about this API: you are still doing the callback way, but the code in the callback just puts the touches into a central queue, waiting for your MonoBehaviour code to come and use them.
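
    A polling sketch follows; TryGetAndMoveNext is an assumed dequeue-style accessor on NativeTouchRingBuffer (check the class for the real one), and E7.Native an assumed namespace.

    ```csharp
    using UnityEngine;
    using E7.Native; // assumed namespace

    public class CentralTouchConsumer : MonoBehaviour
    {
        void Update()
        {
            // Drain everything that arrived since the previous frame.
            // TryGetAndMoveNext is an assumed dequeue-style accessor.
            while (NativeTouch.touches.TryGetAndMoveNext(out NativeTouchData ntd))
            {
                Debug.Log($"{ntd.Phase} at {ntd.X},{ntd.Y}");
            }
        }
    }
    ```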