Checking for Memory Leaks with Xamarin Profiler

Once your app is up and running, has met all the functional requirements and has passed all functional testing, there's one more thing worth considering: performance testing. No matter how well you have addressed the functional requirements, there is always room for unexpected pitfalls due to memory leaks and performance bottlenecks. From experience we've seen cases where data-related issues (such as missing or corrupted data) lead to memory leaks and consequently end up crashing the app. It's hard to catch these crashes, or even reproduce them, unless you have profiling tools (such as the Xamarin Profiler) at your disposal, at least to get a hint of what's happening.

The Xamarin Profiler stands out as a worthy candidate, as it comes built in with Xamarin for Visual Studio and for Mac. Outlined below are some points worth knowing when profiling memory with the Xamarin Profiler.

Note: The following has been done on Android API Level 23 (Debug mode).

The first step is to fire up the Xamarin Profiler. In your Visual Studio solution, go to Tools -> Xamarin Profiler.

This builds the Xamarin.Android project and deploys the solution to our emulator. At the same time it brings up the Xamarin Profiler window.

Select Allocations from the pop-up dialog box, as we intend to capture memory allocations. Then click Next.

Enable Automatic snapshots (tick the check box), as this lets us compare the state of memory at regular intervals. It really helps us visualize the amount of memory being used over time; if there is a memory leak, the usage usually shows a consistent increase. By comparing different snapshots we can identify which classes are responsible for the leak.

Allocations Analysis

Select Start Profiling.

In this case, our profiling session ran for 1 minute and 48 seconds. At first glance we can see a graph at the top. This graph represents the overall memory allocations of the app, and it's our first line of defence for detecting memory leaks. If we have a memory leak, this graph keeps climbing, because the app doesn't free memory and allocations accumulate over time. In this case, however, our app settles down to a level position after a while, which is a good indication.
Underneath the graph, towards the left, is a list of classes in the app itself. These are the classes that occupy memory during this session. The table provides some useful hints as to which classes are responsible for consuming memory. As an example, System.String accounts for the largest share: it created 72,866 objects in total throughout this session, and at the time of taking this snapshot 13,491 of those objects are still alive. This is quite usual because the String type is immutable, so each time we create a String a fresh memory address is allocated.

Double-clicking a class takes us to another view that shows the active objects and their respective memory addresses.

Towards the right is a pie chart that gives us a visual representation of memory consumption by data type. The main data types are classified individually, and any user-defined types are put under the Other category. Hence in this case we have String, Byte, Int32 and Char[], plus Other (for user-defined types).

Snapshots

The other important feature of the Xamarin Profiler is snapshots. Since we've enabled automatic snapshots, the profiler has captured a snapshot at regular intervals. Note that the Size rises with each consecutive snapshot but drops in snapshot 14. This is also reflected in the Count column, where 21,865 allocations have been freed from memory compared to the previous snapshot. If there were a memory leak we would only see the size rising, with no significant drops. In this case, though, it's looking good.

It's also possible to compare two snapshots taken at different points in time. Just right-click -> Compare To -> select the snapshot.

If we happen to see any significant rises in allocation counts, we can pin-point the root cause. Just right-click the class -> Show in Call Tree. This takes us to a call-tree view that highlights the method responsible for the allocations, and from there we can trace back through the method call stack. In this case, the method responsible for the allocations (i.e. FastAllocateString) was invoked by an exception handler in the Android runtime. This was not a memory leak, just an exception handler creating a bunch of strings. If the call stack points to a method in your code, you can fix the issue in your code-base and re-run the profiler to verify it.

Outlined above are some tips to keep in mind when profiling memory with the Xamarin Profiler. The main things to look for are the allocations graph and the snapshot sizes. Doing so saves developer time and avoids the technical debt that improper use of memory can introduce.

Estimating the Focus of Expansion (FoE)

One of the most intriguing things about Optical Flow is the calculation of the Focus of Expansion, or FoE. The FoE is the point at which all flow vectors converge when a camera moves forward. It often indicates the direction in which the camera is heading or aimed, and it is also the point from which flow vectors diverge when the camera moves backwards. The FoE plays a key role in many computer vision applications, including self-driving cars, guided missiles, obstacle detection and robotics. To be useful in these applications, the FoE should be calculated consistently in real time and at a fairly accurate level. The following is a demonstration of my algorithm for this, where I estimate the FoE using a probabilistic approach. It calculates the FoE for a single camera with an inconsistent motion model.

The red circle signifies the FoE in question below.

Here I first obtain a set of sparse flow vectors as described here. The flow vectors are obtained at frame rate using the Lucas-Kanade method, and when the number of flow vectors falls below a minimum threshold they are re-calculated. The algorithm then derives the linear functions for those (normalized) flow vectors and finds the intersection points of those lines. In the ideal case these intersection points would all coincide, since that point theoretically represents the FoE, but due to error they don't. Therefore we need to filter the error out. To do this, the intersection points are arranged into a histogram and the bin with the maximum count is selected. The intersection points that fall inside that bin are then passed through a Discrete Kalman Filter in order to estimate (or predict) the most likely intersection point (i.e. the FoE).
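
To make the idea concrete, here is a minimal sketch of the line-intersection and histogram-voting steps. The function name, parameter values and the simple mean over the winning bin are my own illustrative choices; the actual implementation feeds those points into a Kalman filter per frame rather than averaging them.

import numpy as np

def estimate_foe(points_prev, points_curr, bins=32, frame_size=(640, 480)):
    # points_prev, points_curr: Nx2 float arrays of matched feature positions
    # from consecutive frames (e.g. from cv2.calcOpticalFlowPyrLK).
    flow = points_curr - points_prev

    # Each flow vector defines a line a*x + b*y = c through its feature point.
    a, b = flow[:, 1], -flow[:, 0]
    c = a * points_curr[:, 0] + b * points_curr[:, 1]

    # Intersect every pair of flow lines; ideally they all meet at the FoE.
    pts = []
    n = len(flow)
    for i in range(n):
        for j in range(i + 1, n):
            det = a[i] * b[j] - a[j] * b[i]
            if abs(det) < 1e-9:  # near-parallel lines, skip
                continue
            x = (c[i] * b[j] - c[j] * b[i]) / det
            y = (a[i] * c[j] - a[j] * c[i]) / det
            pts.append((x, y))
    if not pts:
        return None
    pts = np.array(pts)

    # Vote in a 2D histogram and keep only the intersections in the densest bin.
    w, h = frame_size
    hist, xedges, yedges = np.histogram2d(pts[:, 0], pts[:, 1],
                                          bins=bins, range=[[0, w], [0, h]])
    ix, iy = np.unravel_index(np.argmax(hist), hist.shape)
    mask = ((pts[:, 0] >= xedges[ix]) & (pts[:, 0] < xedges[ix + 1]) &
            (pts[:, 1] >= yedges[iy]) & (pts[:, 1] < yedges[iy + 1]))

    # Averaging stands in here for the per-frame Kalman filter update.
    return pts[mask].mean(axis=0)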

 

Optical Flow with OpenCV

Optical Flow is a technique for tracking the motion of image objects in a scene. The output of Optical Flow is a series of 2D flow vectors, which together are called the Flow Field. It has numerous applications and has been used extensively in self-driving cars, autonomous robots, and assistive devices for the visually impaired.

The key concept behind Optical Flow is to calculate the displacement of intensity through time and space. That said, optical flow can be thought of as a function that tracks brightness across time and space. For example, given a point p0 at time t0 with intensity i0,

i0 = I(p0, t0)

where I is the intensity function of a given point at a given time. Then let's assume that at time t1, after motion has moved the original point to p1, its intensity is i1. Now,

i1 = I(p1, t1)

According to Optical Flow, it is considered that,

i0 = i1

In other words, it assumes that the brightness (intensity) of a point stays constant as it moves through the image.
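
For completeness, expanding this assumption to first order shows where tracking methods such as Lucas-Kanade come from. Writing p1 = p0 + (dx, dy) and t1 = t0 + dt, a Taylor expansion gives

I(p0 + (dx, dy), t0 + dt) ≈ I(p0, t0) + Ix dx + Iy dy + It dt

and since the two intensities are assumed equal, the extra terms must cancel, giving the optical flow constraint equation

Ix u + Iy v + It = 0

where Ix, Iy and It are the image derivatives and (u, v) = (dx/dt, dy/dt) is the flow vector. This is one equation in two unknowns, so Lucas-Kanade resolves the ambiguity by assuming that all pixels within a small window share the same (u, v) and solving the resulting system in a least-squares sense.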

Having followed a tutorial in OpenCV, I managed to put together my own Python code for a simple demonstration of Optical Flow using the Lucas-Kanade algorithm [1].
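
My demo code isn't reproduced in this post, but a minimal sparse Lucas-Kanade tracking loop in the spirit of the OpenCV tutorial [1] looks roughly like the following. The corner-detection and window parameters are the tutorial's defaults, not values specific to my demonstration.

import numpy as np
import cv2

cap = cv2.VideoCapture(0)

# Parameters for Shi-Tomasi corner detection and Lucas-Kanade flow
feature_params = dict(maxCorners=100, qualityLevel=0.3, minDistance=7, blockSize=7)
lk_params = dict(winSize=(15, 15), maxLevel=2,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

ret, old_frame = cap.read()
old_gray = cv2.cvtColor(old_frame, cv2.COLOR_BGR2GRAY)
p0 = cv2.goodFeaturesToTrack(old_gray, mask=None, **feature_params)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    if p0 is None or len(p0) == 0:
        # Re-detect features if we have lost them all
        p0 = cv2.goodFeaturesToTrack(gray, mask=None, **feature_params)
        old_gray = gray.copy()
        continue

    # Track the features from the previous frame into the current one
    p1, st, err = cv2.calcOpticalFlowPyrLK(old_gray, gray, p0, None, **lk_params)
    good_new, good_old = p1[st == 1], p0[st == 1]

    # Draw the flow vectors
    for new, old in zip(good_new, good_old):
        a, b = new.ravel().astype(int)
        c, d = old.ravel().astype(int)
        cv2.line(frame, (a, b), (c, d), (0, 255, 0), 2)

    cv2.imshow('optical flow', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

    old_gray = gray.copy()
    p0 = good_new.reshape(-1, 1, 2)

cap.release()
cv2.destroyAllWindows()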

[1] https://docs.opencv.org/3.2.0/d7/d8b/tutorial_py_lucas_kanade.html

Color Object Tracking with OpenCV

Color tracking is a fairly simple activity in OpenCV, and you may find a variety of articles on the Internet describing different mechanisms and implementations. However, the common steps are the following.

  1. Convert the source frame into the HSV color space.
  2. Define the upper and lower bounds for the color you intend to track. In my example I track blue, so the lower and upper bounds are (100, 150, 0) and (140, 255, 255) respectively, with the lower bound being the darker blue.
  3. Find the image mask containing the pixels that fall within the lower and upper bounds mentioned above.
  4. Find contours in the mask image.
  5. Find the maximum contour in terms of contour area, so that we can omit trivial contours and focus only on the most dominant blue object in the scene.
  6. Approximate the maximum contour with a polygon.
  7. Find the bounding rectangle of the above polygon.
  8. Draw the rectangle from the above step on the source image.

The code that follows the above series of steps is provided below.

# This is an exercise to track the blue color from the camera.

import numpy as np
import cv2

cap = cv2.VideoCapture(0)
while True:
    ret, source = cap.read()
    if not ret:
        print('Error in video capture')
        break

    # Convert the frame to HSV and threshold it for blue
    hsv = cv2.cvtColor(source, cv2.COLOR_BGR2HSV)
    lowerblue = np.array([100, 150, 0])
    upperblue = np.array([140, 255, 255])
    mask = cv2.inRange(hsv, lowerblue, upperblue)

    # Find contours in the mask (findContours returns 3 values in OpenCV 3.x
    # and 2 in 4.x; taking the last two works for both)
    contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2:]

    if contours:
        # Keep only the largest blue region
        maxContour = max(contours, key=cv2.contourArea)
        epsilon = 0.1 * cv2.arcLength(maxContour, True)
        approxpoly = cv2.approxPolyDP(maxContour, epsilon, True)

        # Draw its bounding rectangle on the source frame
        x, y, w, h = cv2.boundingRect(approxpoly)
        cv2.rectangle(source, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow('original', source)

    # Press 'q' (key code 113) to quit
    if cv2.waitKey(1) & 0xFF == 113:
        break

cap.release()
cv2.destroyAllWindows()

The demonstration is shown below.

How to think like Steve Jobs..

I love books that add value to our lives and provide us with a lot of inspiration. This particular book, titled "How to think like Steve Jobs", happened to grab my attention out of nowhere when I was browsing through the personal library of my wife's father. It's true that there are numerous books written by many under the label of Steve Jobs, and I believe they all share the wisdom of Steve Jobs overall, no matter how different their stories are.

In the course of reading "How to think like Steve Jobs", I was able to gather an array of material on this man, once known as the hardest-working man in Silicon Valley, and on how he transformed the computer and music industries. He was known for leading Apple from its humble beginnings, starting it off in his parents' garage, to the global empire it is today. So the big question for everybody (especially after his death) was: how did he do it? In this book the author tries to answer this question by drawing out key elements of Jobs' life and work, and of his vision for technology. In fact, the book suggests that everything Jobs produced (Mac, iPod, iPhone, iPad) has his personality, charisma and style built into it. So it is worth studying his life rather than his products, so that we can add some value to our own lives and view the world through the eyes of a visionary.

In this post I intend to highlight some of the key elements from this book that caught my attention.

Abdul Kalam: A Valuable Role Model

Dr. Abdul Kalam is perhaps India's most admired technocrat, who once led the country's space rocketry and missile programme to monumental heights. I recently came across his autobiography 'Wings of Fire', a fascinating account of his life and work and a story full of inspiration. Despite being brought up in a rural village as the son of an ordinary boat-owner, his ceaseless courage ultimately paid off when he became one of the most distinguished people in India: the country's 11th President. Yet what captivated me most is his humble and unassuming nature and the way he maintained it throughout his life. Following are some of the useful traits that I could pick up from his deeply passionate personal story, which I believe will enlighten our lives too.

AR and its Role in Marking Space

It is perhaps surprising to realize that only two things in this world have troubled man's ingenuity for centuries: space and time. These two are absolute benchmarks, often used when making a reference to a physical object in space or when describing a past incident, though it is hard to understand why we always frame our actions or events in relation to space and time. For instance, a special event (a birthday, perhaps) can be expressed in relation to time by marking that occurrence on a calendar, either digitally or manually. We are capable of doing this since time, as we know it, is one-dimensional. It is somewhat puzzling whether we should deal with space in the same manner, because space is three-dimensional and allows travel in multiple directions, as opposed to the single-dimensional nature of time. These implications led our curiosity to focus on one implicit, yet strange, feature of space: "How can we mark space?". I shall later describe the background for arriving at this notion. For the moment, let us accept this question and describe its logic by analogy with our understanding of space.

From a biological point of view, human beings tend to use physical objects to designate places of interest, which often helps them represent space and construct three-dimensional cognitive maps [Egerton, 2005]. The mammalian spatial referencing patterns, as described by Egerton [2005], organise physical objects in the form of a trail for tracing out specific points in their respective environments. Imagine you were exploring an unknown and complex environment and wanted to find your way back after the exploration. One solution would be to mark your trail with pebbles. The pebbles would persist, and you could readily trace your path back on the return, unless an ill-tempered being removed all the pebbles from your sight after you placed them. Extending this concept, imagine we could mark out any point in space with pebbles that remain persistent over time. In such a way, we could pin-point an arbitrary location, even a point somewhere in front of our eyes, freely from any perspective while tracing out complex paths in all three dimensions of space. Extending the idea further, if the pebbles could convey information, then they could be used to pass messages or communicate information to other travelers. Further still, if pebbles could express relationships with their neighbors, complex process models could be expressed.