Friday, March 8, 2013

Cucumber testing on mobile phones



In this post I would like to advocate for a graphical user interface I wrote for writing and executing Cucumber tests. It is particularly useful for testing smartphone apps. By combining it with Calabash for Android and iOS, you get a strong test setup that is easily manageable by technical and non-technical testers as well as developers.

Q-Cumberless Testing is easily extensible: it allows third parties to write their own plugins, and can thus be used in any test setup based on Cucumber.

Watch a screencast on YouTube.

The source code and binary distributions are available on GitHub.

Friday, December 7, 2012

iOS - showing only the shadow of a CALayer

I've recently been working on a simple iOS puzzle game using the Core Animation framework, in which the player has to move various pictures into place. The correct locations are visualized by the blurred shadows of the pictures.

To achieve this without having to draw the shadows manually in GIMP (which happens to be my favorite open-source image manipulation program), I came up with a very simple use of CALayer's shadow properties: simply offset the layer by, say, twice the screen width, and the shadow by twice the screen width in the opposite direction, and voila! Only the shadow of your CALayer is visible - in whatever color you would like and with whatever blur radius you wish.

Here's some code:

layer.bounds = CGRectMake(0.0f, 0.0f, pictureRect.size.width, pictureRect.size.height);
layer.position = CGPointMake(pictureRect.origin.x + (screenWidth * 2.0f), pictureRect.origin.y);
layer.contents = (id)image.CGImage;
    
layer.shadowOffset = CGSizeMake(-(screenWidth * 2.0f), 0.0f);
layer.shadowRadius = 0.0f;
layer.shadowColor = [UIColor blackColor].CGColor;
layer.shadowOpacity = 0.7f;


No need to start up GIMP when one can produce the same effect in code! :-)

Sunday, March 13, 2011

Mac OS X multitouch

Here's some code I found to directly access the trackpad state of, e.g., a MacBook from C++ code. It uses Apple's private MultitouchSupport framework (which ships with Mac OS X) and can be compiled with G++ like this:

g++ -L/System/Library/PrivateFrameworks/MultitouchSupport.framework \
-F/System/Library/PrivateFrameworks \
-framework Cocoa \
-framework MultitouchSupport \
*.cpp


The header file is defined like this:

#ifndef TOUCH_H
#define TOUCH_H

#include <mach/mach.h>
#include <IOKit/IOKitLib.h>
#include <CoreFoundation/CoreFoundation.h>

#include <map>
#include <vector>
#include <iostream>

extern "C" {

typedef struct { float x,y; } mtPoint;
typedef struct { mtPoint pos,vel; } mtReadout;

typedef struct {
int frame;
double timestamp;
int identifier, state, foo3, foo4;
mtReadout normalized;
float size;
int zero1;
float angle, majorAxis, minorAxis; // ellipsoid
mtReadout mm;
int zero2[2];
float unk2;
} Finger;

typedef long MTDeviceRef;
typedef int (*MTContactCallbackFunction)(int,Finger*,int,double,int);

MTDeviceRef MTDeviceCreateDefault();
void MTRegisterContactFrameCallback(MTDeviceRef, MTContactCallbackFunction);
void MTUnregisterContactFrameCallback(MTDeviceRef, MTContactCallbackFunction);
void MTDeviceStart(MTDeviceRef, int);
void MTDeviceStop(MTDeviceRef);
void MTDeviceRelease(MTDeviceRef);
}

extern MTDeviceRef mtDevice;

extern void touchDeviceStart();
extern void touchDeviceStop();

#endif


The implementation is as follows:

#include <stdio.h>
#include <math.h>
#include "touch.h"

MTDeviceRef mtDevice;

int touchDeviceCallback(int device, Finger *data, int nFingers, double timestamp, int frame) {
for (int i = 0; i < nFingers; i++) {
Finger *f = &data[i];
printf("Frame %7d: Angle %6.2f, ellipse %6.3f x%6.3f; "
"position (%6.3f,%6.3f) vel (%6.3f,%6.3f) "
"ID %d, state %d [%d %d?] size %6.3f, %6.3f?\n",
f->frame,
f->angle * 90 / atan2(1,0),
f->majorAxis,
f->minorAxis,
f->normalized.pos.x,
f->normalized.pos.y,
f->normalized.vel.x,
f->normalized.vel.y,
f->identifier, f->state,
f->foo3,
f->foo4,
f->size,
f->unk2);
}
return 0;
}

void touchDeviceStart() {
printf("Starting touch device...\n");
mtDevice = MTDeviceCreateDefault();
MTRegisterContactFrameCallback(mtDevice, touchDeviceCallback);
MTDeviceStart(mtDevice, 0);
}

void touchDeviceStop() {
printf("Stopping touch device...\n");
MTDeviceStop(mtDevice);
MTUnregisterContactFrameCallback(mtDevice, touchDeviceCallback);
MTDeviceRelease(mtDevice);
}


Hope it's useful!

Thursday, December 9, 2010

Force-based graph drawing


Hi again!

This time I will show you my simple implementation of force-based graph drawing.

For background information, see Wikipedia's page on force-based graph drawing. My implementation is based on the algorithm outlined there.

Pseudocode

Here is the pseudocode from Wikipedia on which the implementation is based:

set up initial node velocities to (0,0)
set up initial node positions randomly // make sure no 2 nodes are in exactly the same position
loop
  total_kinetic_energy := 0 // running sum of total kinetic energy over all particles
  for each node
    net-force := (0, 0) // running sum of total force on this particular node
         
    for each other node
      net-force := net-force + Coulomb_repulsion( this_node, other_node )
    next node
         
    for each spring connected to this node
      net-force := net-force + Hooke_attraction( this_node, spring )
    next spring
         
    // without damping, it moves forever
    this_node.velocity := (this_node.velocity + timestep * net-force) * damping
    this_node.position := this_node.position + timestep * this_node.velocity
    total_kinetic_energy := total_kinetic_energy + this_node.mass * (this_node.velocity)^2
  next node
until total_kinetic_energy is less than some small number  // the simulation has stopped moving


So, the basic idea is to group connected nodes while still keeping a distance between all nodes in the graph. This, of course, doesn't prevent edges from crossing each other, as it is merely a method for positioning nodes on the plane; however, it turns out that it does quite well visually anyway.

Defining a simple graph

First, given that we have access to a class Vector2D, a simple graph consists of nodes defined as follows:

public class GraphNode {
  public List<GraphNode> in = new LinkedList<GraphNode>();
  public List<GraphNode> out = new LinkedList<GraphNode>();

  public Vector2D velocity = new Vector2D(0.0f, 0.0f);
  public Vector2D position = new Vector2D((float) Math.random() * GRAPH_SIZE, (float) Math.random() * GRAPH_SIZE);

  public GraphNode() {
  }
}
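The Vector2D class itself isn't shown in this post. A minimal sketch of the API the code relies on - my assumption, not the original implementation - could look like this:

```java
// Minimal immutable 2D vector; a sketch of the assumed Vector2D API.
class Vector2D {
  public final float x, y;

  public Vector2D(float x, float y) {
    this.x = x;
    this.y = y;
  }

  public Vector2D add(Vector2D other)      { return new Vector2D(x + other.x, y + other.y); }
  public Vector2D subtract(Vector2D other) { return new Vector2D(x - other.x, y - other.y); }
  public Vector2D multiply(float factor)   { return new Vector2D(x * factor, y * factor); }
  public Vector2D divide(float divisor)    { return new Vector2D(x / divisor, y / divisor); }

  // Squared length; handy for the kinetic-energy sum, avoiding a square root
  public float lengthSquared() {
    return (x * x) + (y * y);
  }
}
```

Making the class immutable (every operation returns a new vector) keeps the force calculations free of aliasing surprises.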


Main loop

Next is the main loop. It is a simple implementation of the above pseudocode. Given that we have a list allNodes containing all nodes in the graph, the main loop looks like this:

float totalKineticEnergy = 100.0f;

while (totalKineticEnergy > 0.01f) {
  totalKineticEnergy = 0.0f;

  for (GraphNode node : allNodes) {
    Vector2D netForce = new Vector2D(0.0f, 0.0f);

    for (GraphNode otherNode : allNodes) {
      if (otherNode == node) {
        continue;
      }
      netForce = netForce.add(coulombRepulsion(node, otherNode));
    }

    for (GraphNode otherNode : node.out) {
      netForce = netForce.subtract(hookeAttraction(node, otherNode));
    }

    for (GraphNode otherNode : node.in) {
      netForce = netForce.subtract(hookeAttraction(node, otherNode));
    }

    // The pseudocode's timestep and node mass are folded into the constants here
    node.velocity = node.velocity.add(netForce).multiply(1.0f - DAMPING);
    node.position = node.position.add(node.velocity);

    totalKineticEnergy += (node.velocity.x * node.velocity.x) + (node.velocity.y * node.velocity.y);
  }
}


Calculation of Coulomb Repulsion

Now comes the interesting part: calculating the Coulomb repulsion. It is straightforward. Coulomb's law says:

F = ElectrostaticConstant * ((charge1 * charge2) / distance²)

By folding the electrostatic constant and the two charges into a single suitable constant, e.g. 0.05, and multiplying by the normalized difference vector to keep the direction, the method looks like this:

private Vector2D coulombRepulsion(GraphNode n1, GraphNode n2) {
  Vector2D difference = n1.position.subtract(n2.position);
  float dist = (float) Math.sqrt((difference.x * difference.x) + (difference.y * difference.y));

  if (dist == 0.0f) {
    return new Vector2D(0.0f, 0.0f);
  }

  // (COULOMB_REPULSION_CONSTANT * (difference / dist)) / (dist * dist)
  return difference.divide(dist)
                   .multiply(COULOMB_REPULSION_CONSTANT)
                   .divide(dist * dist);
}


Calculation of Hooke's Attraction

Turning to the calculation of Hooke's attraction, this is just as easy. The formula used is:

F = DifferenceBetweenLocations * ElasticConstant

Choosing an elastic constant, e.g. 0.0001, it looks like this:

private Vector2D hookeAttraction(GraphNode n1, GraphNode n2) {
  return n1.position.subtract(n2.position).multiply(HOOKE_ELASTIC_CONSTANT);
}
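Putting the pieces together, here is a self-contained sketch that runs the layout on a small three-node example graph. The constant values, the GRAPH_SIZE, the iteration cap and the triangle graph are example assumptions of mine, not values from this post:

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.Random;

// Self-contained demo of the force-based layout described above.
// All constants below are assumed example values.
class ForceLayoutDemo {
  static final float GRAPH_SIZE = 100.0f;
  static final float COULOMB_REPULSION_CONSTANT = 0.05f;
  static final float HOOKE_ELASTIC_CONSTANT = 0.0001f;
  static final float DAMPING = 0.15f;

  static class Vector2D {
    final float x, y;
    Vector2D(float x, float y) { this.x = x; this.y = y; }
    Vector2D add(Vector2D o) { return new Vector2D(x + o.x, y + o.y); }
    Vector2D subtract(Vector2D o) { return new Vector2D(x - o.x, y - o.y); }
    Vector2D multiply(float f) { return new Vector2D(x * f, y * f); }
    Vector2D divide(float d) { return new Vector2D(x / d, y / d); }
    float lengthSquared() { return (x * x) + (y * y); }
  }

  static class GraphNode {
    List<GraphNode> in = new LinkedList<>();
    List<GraphNode> out = new LinkedList<>();
    Vector2D velocity = new Vector2D(0.0f, 0.0f);
    Vector2D position;
    GraphNode(Random random) {
      position = new Vector2D(random.nextFloat() * GRAPH_SIZE, random.nextFloat() * GRAPH_SIZE);
    }
  }

  static Vector2D coulombRepulsion(GraphNode n1, GraphNode n2) {
    Vector2D difference = n1.position.subtract(n2.position);
    float dist = (float) Math.sqrt(difference.lengthSquared());
    if (dist == 0.0f) {
      return new Vector2D(0.0f, 0.0f);
    }
    return difference.divide(dist).multiply(COULOMB_REPULSION_CONSTANT).divide(dist * dist);
  }

  static Vector2D hookeAttraction(GraphNode n1, GraphNode n2) {
    return n1.position.subtract(n2.position).multiply(HOOKE_ELASTIC_CONSTANT);
  }

  // Runs the layout until the total kinetic energy is small; returns the iteration count.
  static int layout(List<GraphNode> allNodes, int maxIterations) {
    float totalKineticEnergy = Float.MAX_VALUE;
    int iterations = 0;
    while (totalKineticEnergy > 0.01f && iterations < maxIterations) {
      totalKineticEnergy = 0.0f;
      for (GraphNode node : allNodes) {
        Vector2D netForce = new Vector2D(0.0f, 0.0f);
        for (GraphNode otherNode : allNodes) {
          if (otherNode != node) {
            netForce = netForce.add(coulombRepulsion(node, otherNode));
          }
        }
        for (GraphNode otherNode : node.out) {
          netForce = netForce.subtract(hookeAttraction(node, otherNode));
        }
        for (GraphNode otherNode : node.in) {
          netForce = netForce.subtract(hookeAttraction(node, otherNode));
        }
        node.velocity = node.velocity.add(netForce).multiply(1.0f - DAMPING);
        node.position = node.position.add(node.velocity);
        totalKineticEnergy += node.velocity.lengthSquared();
      }
      iterations++;
    }
    return iterations;
  }

  public static void main(String[] args) {
    Random random = new Random(42); // fixed seed keeps the demo deterministic
    List<GraphNode> nodes = new ArrayList<>();
    for (int i = 0; i < 3; i++) {
      nodes.add(new GraphNode(random));
    }
    // Connect the nodes in a triangle: 0 -> 1 -> 2 -> 0
    for (int i = 0; i < 3; i++) {
      GraphNode from = nodes.get(i), to = nodes.get((i + 1) % 3);
      from.out.add(to);
      to.in.add(from);
    }
    int iterations = layout(nodes, 100000);
    System.out.println("Settled after " + iterations + " iterations");
  }
}
```

With weak constants like these, convergence is slow but stable; in practice you would tune the constants (and possibly the termination threshold) to your graph size.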


That's it!

Hope it was useful!

Monday, October 18, 2010

Frustum implementation

frustum.h

#ifndef FRUSTUM_H
#define FRUSTUM_H

#include "vector.h"

class Frustum {

public:

void initialize(float angle, float ratio, float nearDistance);
void destroy();

void update(VECTOR3D position);

VECTOR3D up;
VECTOR3D right;
VECTOR3D orientation;

VECTOR3D nearCenter;
VECTOR3D nearTopLeft;
VECTOR3D nearTopRight;
VECTOR3D nearBottomLeft;
VECTOR3D nearBottomRight;

float modelview[4*4];

float view[3];
float position[3];

float nearDistance;

float nearHeight;
float nearWidth;

private:

};

#endif


frustum.cpp

#include <gl.h>
#include <math.h>
#include "mymath.h"
#include "frustum.h"

void Frustum::initialize(float angle, float ratio, float nearDistance) {

// Register parameters
this->nearDistance = nearDistance;

// Compute the width and height of the near plane section
float tang = (float) tan((PI / 180.0f) * angle);

nearHeight = nearDistance * tang;
nearWidth = nearHeight * ratio;
}

void Frustum::destroy() {
}

void Frustum::update(VECTOR3D position) {

// Calculate up- and right-vector
glGetFloatv(GL_MODELVIEW_MATRIX, modelview);

up.x = modelview[1];
up.y = modelview[5];
up.z = modelview[9];

right.x = modelview[0];
right.y = modelview[4];
right.z = modelview[8];

orientation.crossProduct(up,right);

// Normalize vectors
up.normalize();
right.normalize();
orientation.normalize();

// Calculate center of near frustum
nearCenter = (orientation * nearDistance * 1.2f);

// Calculate corners of near frustum
nearTopLeft = nearCenter + (up * (nearHeight / 2.0f)) - (right * (nearWidth / 2.0f));
nearTopRight = nearCenter + (up * (nearHeight / 2.0f)) + (right * (nearWidth / 2.0f));
nearBottomLeft = nearCenter - (up * (nearHeight / 2.0f)) - (right * (nearWidth / 2.0f));
nearBottomRight = nearCenter - (up * (nearHeight / 2.0f)) + (right * (nearWidth / 2.0f));

// Calculate view vector
view[0] = nearCenter.x;
view[1] = nearCenter.y;
view[2] = nearCenter.z;

// Calculate position vector
Frustum::position[0] = position.x;
Frustum::position[1] = position.y;
Frustum::position[2] = position.z;
}

Implementation of Real-Time Fog using Post-processing in OpenGL



Based on the paper "Real-Time Fog using Post-processing in OpenGL" (which can be found at http://cs.gmu.edu/~jchen/cs662/fog.pdf), I made a simple implementation of real-time fog in OpenGL/GLSL.

The basic idea is to render the scene, grab the depth buffer, and then draw the fog as a fullscreen, alpha-blended quad, using the information from the depth buffer. That is, for each fragment we reconstruct its position in world coordinates. Then we integrate along the line of sight - from the eye position to the fragment's world position - to get the fog density.

So what we need to do is this:

1) Enable framebuffer and render the scene to the color and depth textures.
2) Draw quad with color texture to screen.
3) Enable the shader program and draw a textured quad (with the depth texture as input), blending with the fog alpha (and possibly color) calculated by the shader.

Step 2 can obviously be done as part of step 3.

Initialization:

We initialize the framebuffer and attach a depth texture to it. Then we create the shader program and initialize the frustum (see the Frustum Implementation post above for the frustum code).

// Generate framebuffer
glGenFramebuffersEXT(1, &framebuffer);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebuffer);

// Create color buffer
glGenTextures(1, &colorBufferTextureIndex);
glBindTexture(GL_TEXTURE_2D, colorBufferTextureIndex);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, VIDEO_TEXTURE_WIDTH, VIDEO_TEXTURE_WIDTH, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);

// Create depth buffer
glGenTextures(1, &depthBufferTextureIndex);
glBindTexture(GL_TEXTURE_2D, depthBufferTextureIndex);

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_INTENSITY);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);

glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8_EXT, VIDEO_TEXTURE_WIDTH, VIDEO_TEXTURE_WIDTH, 0, GL_DEPTH_STENCIL_EXT, GL_UNSIGNED_INT_24_8_EXT, NULL);

// Create vertex shader from "const char *vertexFogShaderCode[]"
vertexShader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vertexShader, vertexFogShaderCodeLines, vertexFogShaderCode, NULL);
glCompileShader(vertexShader);

// Create fragment shader from "const char *fragmentFogShaderCode[]"
fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fragmentShader, fragmentFogShaderCodeLines, fragmentFogShaderCode, NULL);
glCompileShader(fragmentShader);

// Create program, attach shaders and then link it
program = glCreateProgram();

glAttachShader(program, vertexShader);
glAttachShader(program, fragmentShader);

glLinkProgram(program);

// Create frustum
frustum = new Frustum;
frustum->initialize(45.0f, (float) VIDEO_SCREEN_WIDTH / (float) VIDEO_SCREEN_HEIGHT, 5.0f * WORLD_SCALE);


Main loop:

In the main loop, we first render the scene with the framebuffer bound, so that the color and depth information ends up in the attached textures.

To do so, we bind the framebuffer, attach the textures, update the frustum and render the scene as normal.

// Enable framebuffer
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebuffer);

// Attach depth and color texture to framebuffer
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, depthBufferTextureIndex, 0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, colorBufferTextureIndex, 0);

// Initialize view
...

// Update frustum
frustum->update(view);

// Render scene
...

// Disable framebuffer
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);


Then we render the color buffer to the screen. You can skip this step if you pass the color texture directly to the shader program.

// Specify orthographic projection
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, VIDEO_SCREEN_WIDTH, 0, VIDEO_SCREEN_HEIGHT, -1.0f, 1.0f);

// Prepare render state
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glDisable(GL_DEPTH_TEST);

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, colorBufferTextureIndex);

glColor3f(1.0f, 1.0f, 1.0f);

glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f ); glVertex3f(0.0f, 0.0f, 0.0f);
glTexCoord2f(VIDEO_TEXTURE_RATIO_X, 0.0f ); glVertex3f(VIDEO_SCREEN_WIDTH, 0.0f, 0.0f);
glTexCoord2f(VIDEO_TEXTURE_RATIO_X, VIDEO_TEXTURE_RATIO_Y); glVertex3f(VIDEO_SCREEN_WIDTH, VIDEO_SCREEN_HEIGHT, 0.0f);
glTexCoord2f(0.0f, VIDEO_TEXTURE_RATIO_Y); glVertex3f(0.0f, VIDEO_SCREEN_HEIGHT, 0.0f);
glEnd();

// Reset state
glDisable(GL_TEXTURE_2D);
glEnable(GL_DEPTH_TEST);


Finally, we render the fog using the shader program.

// Enable shader
glUseProgram(program);

// Pass view vector to shader
GLint viewLocation = glGetUniformLocation(program, "c_view");
glUniform3fv(viewLocation, 1, frustum->view);

// Pass position vector to shader
GLint positionLocation = glGetUniformLocation(program, "c_position");
glUniform3fv(positionLocation, 1, frustum->position);

// Prepare render state
glDisable(GL_DEPTH_TEST);

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, depthBufferTextureIndex);

// Draw texture with shader
glBegin(GL_QUADS);
glNormal3f(frustum->nearBottomLeft.x, frustum->nearBottomLeft.y, frustum->nearBottomLeft.z ); glTexCoord2f(0.0f, 0.0f ); glVertex3f(-1.0f, -1.0f, 1.0f);
glNormal3f(frustum->nearBottomRight.x, frustum->nearBottomRight.y, frustum->nearBottomRight.z ); glTexCoord2f(VIDEO_TEXTURE_RATIO_X, 0.0f ); glVertex3f( 1.0f, -1.0f, 1.0f);
glNormal3f(frustum->nearTopRight.x, frustum->nearTopRight.y, frustum->nearTopRight.z ); glTexCoord2f(VIDEO_TEXTURE_RATIO_X, VIDEO_TEXTURE_RATIO_Y); glVertex3f( 1.0f, 1.0f, 1.0f);
glNormal3f(frustum->nearTopLeft.x, frustum->nearTopLeft.y, frustum->nearTopLeft.z ); glTexCoord2f(0.0f, VIDEO_TEXTURE_RATIO_Y); glVertex3f(-1.0f, 1.0f, 1.0f);
glEnd();

// Reset state
glDisable(GL_TEXTURE_2D);
glDisable(GL_BLEND);
glEnable(GL_DEPTH_TEST);

// Disable shader
glUseProgram(0);


Vertex shader:

The vertex shader is very simple: it just passes the vectors on to the fragment shader in interpolated form.

varying vec3 fragmentvec;
varying vec3 positionvec;
varying vec3 viewvec;
uniform vec3 c_view;
uniform vec3 c_position;

void main()
{
gl_TexCoord[0] = gl_MultiTexCoord0;

gl_FrontColor = gl_Color;
gl_BackColor = gl_Color;

gl_Position = gl_Vertex;

positionvec = c_position * -2.0;
fragmentvec = gl_Normal;
viewvec = c_view;
}


Fragment shader:

The fragment shader is a bit more complicated. This is where the fragment position is reconstructed in world coordinates and the integral is calculated. Have a look at the paper for a detailed explanation of how the calculations work.

The integral function should be adapted to your needs. You might want to make it depend on time - which gives you dynamic fog!

uniform sampler2D tex;
varying vec3 fragmentvec;
varying vec3 viewvec;
varying vec3 positionvec;

void main()
{
vec4 zbuffer = texture2D(tex, gl_TexCoord[0].st);

vec3 fragmentunitvec = normalize(fragmentvec);
vec3 viewunitvec = normalize(viewvec);

float z = -50.0625 / (zbuffer.z - 1.0025); // z = -p34 / (z' + p33) = -((-2f*n)/(f-n)) / (z' + (f+n)/(n-f))
float u = z / dot(fragmentunitvec, viewunitvec);
vec3 worldfragment = positionvec + (u * fragmentunitvec);

vec4 fragmentcolor = vec4(1.0,1.0,1.0,0.0);

float integral = 0.02 * -(exp(-positionvec.y*0.01)-exp(-worldfragment.y*0.01)); // integral(e^(worldfragment.y))
float F = u * integral / (positionvec.y - worldfragment.y);

fragmentcolor.a = 1.0 - exp(-F);
gl_FragColor = fragmentcolor;
}
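A side note on the line `z = -50.0625 / (zbuffer.z - 1.0025)`: the magic numbers are the relevant entries of the projection matrix, and (assuming the depth value is in NDC, i.e. in [-1, 1]) they appear to correspond to a near plane of 25 and a far plane of 20025 world units. That reconstruction is mine, not stated in the original, but a small sketch confirms the constants match:

```java
// Linearize an NDC depth value z' in [-1, 1] back to a positive eye-space
// distance, using the standard GL projection entries
//   p33 = -(f + n) / (f - n)   and   p34 = -(2 * f * n) / (f - n),
// so that distance = p34 / (z' + p33).
// The near/far values in main() (25 and 20025) are my reconstruction from the
// shader's magic constants 50.0625 and 1.0025.
class DepthLinearization {
  static double linearizeDepth(double zNdc, double near, double far) {
    double p33 = -(far + near) / (far - near);
    double p34 = -(2.0 * far * near) / (far - near);
    return p34 / (zNdc + p33);
  }

  public static void main(String[] args) {
    double near = 25.0;
    double far = 20025.0;
    // With these planes: p33 = -20050 / 20000 = -1.0025 and
    // p34 = -(2 * 20025 * 25) / 20000 = -50.0625, matching the shader.
    System.out.println(linearizeDepth(-1.0, near, far)); // distance at the near plane (~25)
    System.out.println(linearizeDepth(1.0, near, far));  // distance at the far plane (~20025)
  }
}
```

If your depth texture yields values in [0, 1] instead of [-1, 1], remap with z' = 2 * d - 1 before linearizing.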


That's it!

I hope it was useful!