About Project

Hands Up is an interactive educational installation designed as a reading-time activity for primary schools. Our installation is an interactive story in which the users become the main character and participate in the story in order to unveil the plot. The storyline features the protagonist, Mason, a Junior Detective who investigates crimes and solves mysteries with the help of the users.

The setup of the installation requires the program to be projected onto a blank wall, whilst children interact with the Kinect and experience a learning adventure. Only one child interacts with the Kinect at a time; we intend to give the group of children paper Junior Detective badges, each with a unique ID number. One ID is called out by the installation whenever an opportunity to interact with the story arises.

Additional Comments

The last stage of implementation was hard and messy. Each of us provided pieces of code that worked perfectly on their own, but a lot of adjustments had to be made once they were brought into the main body. The biggest obstacle was trying to create separate classes. From the beginning we tried to build the character in a class, and in that we partly succeeded; but trying to retrieve the skeletal points from a separate class was nothing short of a nightmare. All kinds of errors and bugs popped up; at the very best the program ran but showed nothing, or crashed after a very short time. Since almost every scene depends on the position of the hands, we had to put all the code in one place. There is obviously a solution to that, but being so new to OpenFrameworks and C++, most of our time was spent making the code functional and running as smoothly as possible, so refactoring was set aside.

The other big issue was switching states. Even though it was very straightforward for most of the screens, the firefly screen took a very long time (and is still not perfect). That is due to what was mentioned before: all the code lives in the same file (testApp.cpp), which creates a lot of interference between variables and statements. Overall we think the prototype looks quite clean. It's a shame we ran out of time and could not develop our project further.
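
For illustration, here is a rough sketch (with made-up class and member names) of the kind of refactor we would have liked to do: the scene class holds no OpenNI code at all, and testApp reads the skeleton and hands the relevant points in every frame.

```cpp
#include "ofMain.h"

// Hypothetical scene class: it never touches OpenNI itself, it just receives
// the tracked hand points from testApp each frame.
class FireflyScene {
public:
    void update(const ofPoint& leftHand, const ofPoint& rightHand) {
        left  = leftHand;
        right = rightHand;
    }

    void draw() {
        // placeholder drawing at each hand position
        ofDrawCircle(left.x, left.y, 20);    // ofCircle() in older OpenFrameworks
        ofDrawCircle(right.x, right.y, 20);
    }

private:
    ofPoint left, right;
};
```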

The Hands Up Bunch.

User Testing #2

After the feedback from our project adviser, we needed to find a second group of children to test with once we had made the suitable changes. This proved to be really difficult: Spring break was ahead of us, and the children would be having exams at the end of the month. After many calls and emails to more than 45 schools in the Lewisham and Bromley areas, we finally turned to a less conventional educational organisation. After some negotiations we received a positive answer from IRMO, an Indo-American charity in Brixton that, amongst other things, supports families establishing themselves in London.

The people at the charity were really welcoming and offered us their space and an interested attitude towards our project. We asked for only 7 children and got more attention than we bargained for: a whopping 15 children wanted to try our interactive adventure, and we were delighted to let them. We found some difficulties, but the reception and the chemistry with the crowd were good. Everyone was pleased and very involved; however, ambitious as we are, and because we would like to make it perfect, we have put together a few observations to improve on.

– The children get carried away with the activity, forget they need to interact and step outside the Kinect area.
– Sometimes the program will run and show the screen, but the interaction won't work.
– Sometimes the children do not fully understand the gesture.
– Right-handed participants generally preferred to use their right hand for the games.
– The audio should be reinforced with text, as it is a bit long to take in all the information in one go.

Anyway, it was a great day and another great reason to work in the learning field. Thank you very much for reading.

Video of testing

Development Progress #3

As deadlines have been closing in and the pressure to get a working product has been setting in, we decided to really break the application down into a few primary states. The original wireframe featured an elaborate storyline with a larger quantity of graphics, audio, gestures and features; however, we have had to modify this plan to make our project realistically achievable.

Within this modified wireframe we will have two types of scenes: active and passive. The active scenes are where users participate in the plot by interacting with the Kinect in order to complete a task, such as catching fireflies, whilst the passive scenes simply display an animation and play dialogue. The questions that arose during the development process were: how do we progress from each scene/state, and how do we use each state effectively and run the code correctly so that nothing is left running in the background?
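
To make the idea concrete, here is a minimal sketch of how such a state machine can be laid out; the scene names, flags and counts are made up for illustration and are not our exact code.

```cpp
// Each scene is an enum value; update() only runs the logic for the current
// scene and advances the state when that scene's task is finished, so nothing
// keeps running in the background.
enum SceneState { INTRO_ANIM, FIREFLY_TASK, OUTRO_ANIM };

struct SceneManager {
    SceneState state = INTRO_ANIM;
    bool introFinished  = false;   // set true when the intro animation ends
    int firefliesCaught = 0;       // incremented by the firefly scene
    int totalFireflies  = 10;

    void update() {
        switch (state) {
            case INTRO_ANIM:
                if (introFinished) state = FIREFLY_TASK;
                break;
            case FIREFLY_TASK:
                if (firefliesCaught >= totalFireflies) state = OUTRO_ANIM;
                break;
            case OUTRO_ANIM:
                break;
        }
    }
};
```

draw() follows the same pattern, with one branch per state drawing only that scene's graphics.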

We created a Character class to control the full-body avatar of the user. Initially, we had major problems constructing this class: for some reason, although there was no fault in the code, OpenFrameworks would generate linker errors, which corrupted the entire piece of software. OpenFrameworks had to be reinstalled three times and the OpenNI addon manually reinserted each time because of these linker errors. We avoided this happening again by implementing most of our code in testApp.h and testApp.cpp. Another frustrating issue was that, at times, when attempting to transfer the files to GitHub, the OpenFrameworks folder would be wiped out, which meant reinstalling everything all over again. This made us a little nervous every time we tried to use git, and also made us appreciate backup files and our backed-up backup files.

We structured our program into states, with each state launching a different scene in the story. The animation launched in the initial scenes was implemented as an image sequence. Initially, we attempted to implement the animation in a way similar to Processing, where we create an array of images, access them via a for loop and draw the images in the same location at a certain frame rate; however, we had issues implementing the image arrays and for loops, and only one static image would display each time. We then decided to create an ofDirectory object, which accesses each frame of the animation straight from the folder; the frames are loaded into an array and displayed one image at a time, in accordance with the set duration and time. The image sequences all play on loop and are triggered when specified if statements become true. OpenFrameworks seems to love vectors, and this is how we stored our image sequence. Initially we declared the array as vector<ofImage>seq, but noticed it was a bit glitchy and lagged slightly whenever the program was launched; after changing the element type to ofTexture, the program ran at a normal pace.
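
Below is a simplified sketch of this approach, assuming PNG frames and a fixed playback rate; the folder name and frame duration are placeholders, and the exact ofImage/ofTexture calls vary slightly between OpenFrameworks versions.

```cpp
#include "ofMain.h"

// Loads an image sequence from a folder into textures, then draws one frame
// at a time based on elapsed time, looping automatically.
class ImageSequence {
public:
    std::vector<ofTexture> frames;
    float frameDuration = 1.0f / 12.0f;   // assumed playback rate

    void load(const std::string& folder) {
        ofDirectory dir(folder);
        dir.allowExt("png");
        dir.listDir();
        dir.sort();                        // keep the frames in order
        for (std::size_t i = 0; i < dir.size(); ++i) {
            ofImage img;
            if (img.load(dir.getPath(i))) {
                frames.push_back(img.getTexture());   // store as ofTexture, not ofImage
            }
        }
    }

    void draw(float x, float y) {
        if (frames.empty()) return;
        // pick the current frame from elapsed time
        std::size_t index = (std::size_t)(ofGetElapsedTimef() / frameDuration) % frames.size();
        frames[index].draw(x, y);
    }
};
```

Keeping only ofTexture objects avoids holding a CPU-side pixel copy for every frame, which is one plausible reason the ofImage version felt heavier.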

We created an ofFirefly class, which controls the movement and other functions related to the fireflies. We designed the fireflies as balls of light rather than tiny insects, since this made the scene run faster and function properly. When we tried to include more graphics in that particular scene, we noticed the tracking of the character avatar's limbs would lag slightly, which made the scene look strange and confused users as well as us.
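
As a simplified illustration of the idea (radii, colours and the catch distance are made-up values), each firefly can be reduced to a drifting point of light that is marked as caught once a hand point comes close enough:

```cpp
#include "ofMain.h"

class Firefly {
public:
    ofPoint pos;
    bool caught = false;

    Firefly(float x, float y) : pos(x, y) {}

    void update() {
        if (caught) return;
        // drift a little each frame
        pos.x += ofRandom(-2, 2);
        pos.y += ofRandom(-2, 2);
    }

    void draw() {
        if (caught) return;
        ofSetColor(255, 255, 150, 180);       // soft yellow ball of light
        ofDrawCircle(pos.x, pos.y, 15);       // ofCircle() in older OpenFrameworks
    }

    // mark the firefly as caught when a hand point is within reach
    void checkHand(const ofPoint& hand) {
        if (!caught && pos.distance(hand) < 40) caught = true;
    }
};
```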

Our main aim at this point in time is to ensure our code is functional; that if users were to participate, they would understand what to do and be able to perform all the gestures properly.

Beanstalk image used for one of the states.

User Testing #1

First of all, creating a project that targets young children is a very difficult and frustrating process, mainly because gathering participants is one of the hardest things to do. Important procedures have to be considered, such as carrying out a DBS check, getting parental consent, carrying all the installation equipment from one location to another (might I just add, on public transport) and managing time (since children get bored and parents have places to be). This post explains in more detail some of the user tests we tried to arrange that did not work out. Eventually, we decided to use snowball sampling: we invited one parent and their children and asked them to inform others about the user testing, so that anyone interested could show up too.

User Testing #1

We managed to (finally) arrange a user test with 4 participants. We set up the equipment, passed around user testing information leaflets and dealt with the consent forms. We then launched the installation and observed how the participants understood our program. Afterwards, we conducted a group discussion to find out their opinions. The parents of the participants wanted the footage of the user testing to be kept private and we respect their wishes; however, you can read the information leaflet to get the gist of what the user testing was all about: Consent Form Information Leaflet.

Evaluating this Process:

At this point in time, our program was not fully complete; it had no sound, minimal graphics and consisted of only two scenes. Since the program was quite basic, the children did not understand what to do. We realised we needed to include some sort of mini demo within the program to show them how to pass each scene, and that audio which tells the story and subtly instructs them on what to do would be a good way of guiding them.

The heights of children within Key Stage 2 vary quite significantly; some kids are quite small whilst others are fairly tall, and we learnt that height matters when it comes to the Kinect. During the testing we had to constantly adjust the position of the Kinect, making it lower or higher depending on the height of the user in front of it.

We placed a little mat on the floor so that users would remain within the trackable zone, which proved to be quite effective. At times the children would become a bit too excited and move ever so slightly outside this region; however, the distance moved was not too far, so this did not impact the running of the program in any significant way.

The children participating in this user test did not seem to mind waiting for their turn; they would excitedly but patiently anticipate their go. Some of the participants had younger siblings (in Key Stage 1 and Nursery) who were waiting beside their parents; at times they would break free from their parents' grasp and run around the room and in front of the Kinect. This was a really reassuring observation, because it made us grateful for choosing a slightly older and more mature target audience; they are easier to communicate with, behave better and understand things more quickly.

So far, the program does not really help children improve their English skills, since there is no text to read and no dialogue. We think that including audio for the character dialogue, and possibly text that lights up as it is being read, will help the children improve their vocabulary.

During the group discussion, the children confirmed everything we had observed; they did not understand the instructions and felt it was not an effective learning tool. However, they did like the project and were curious to see what the end result would be.

Overall, the children were happy and remained interested throughout the whole process; thankfully, their feedback was honest and constructive.

We aim to: 

  1. Insert audio for the dialogue and sound effects for backgrounds
  2. Insert more graphics and better animations
  3. Include mini demos/tips for scenes
  4. Put in the other scenes from the storyline

Test setup:

User testing set-up sketch 1
User testing set-up sketch 2

User Testing Arrangement Struggles

Trying to gather a sample group in order to conduct our user tests has proven to be one of the most difficult challenges of this project. Following our proposed method schedule, we started sending out emails to the Goldsmiths outreach team, local libraries and schools in Lewisham. Often we had to deal with people who would reply after long periods of time; some instantly rejected the project, others led us on for a few weeks and then rejected us, and in one case our target market was confused with secondary school students, so we had to decline that option.

One main event we came close to organising was a user test at an event called 'Hack the Library'. After a few weeks of talking, the organisers decided not to go forward with our project, stating that their event was more about 'sharing skills to make and create things'.

Another event we almost managed to book was the 'Science and Tech Fair'; however, we found the responses we received to be quite vague. We spent a few weeks discussing the event and explaining the project and our user testing aims, and the organisers seemed quite positive. They declined the user testing a few days before the actual event: they preferred to feature a finished, polished product rather than a prototype gathering user testing data. They would also only be able to provide us with gifted and talented Key Stage 2 children, rather than those in booster classes.

Some schools rejected our project because of issues with schedules and timing. The dates of half term vary from school to school, and many schools did not want Year 6 children participating since they were preparing for their SATs.

Development Progress #2

Finding out how to get and use the different joints from the skeleton tracking has so far been much like the rest of the project programming-wise: more difficult than expected. However, through reading, trawling the internet and talks with our supervisor, we now have some skeletal functions working.

It turns out that using OpenNI and its skeletal tracking in 3D space really boils down to two kinds of coordinates: world position and projective position. The world position, as you can imagine, is the position of the joint in the real world as tracked by the Kinect. The projective position is that position converted into screen space, so you can use it directly in your application (e.g. drawing a character on-screen).

So far we’ve managed to get images drawn at certain points (the head, torso, hands and feet) as a basic prototype. It may not look like much, but it really solidifies the next step in the creation of this application.
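
Since the exact calls for reading joints differ between OpenNI addons, here is a stripped-down sketch in which the projective (screen-space) joint positions are simply passed in as points; the asset names are hypothetical.

```cpp
#include "ofMain.h"

// Draws a body-part image centred on each projective joint position.
// How the joint points are obtained depends on the OpenNI addon being used,
// so they are plain inputs here.
class CharacterAvatar {
public:
    void setup() {
        head.load("head.png");              // hypothetical asset names
        hand.load("hand.png");
        head.setAnchorPercent(0.5, 0.5);    // draw centred on the joint
        hand.setAnchorPercent(0.5, 0.5);
    }

    void draw(const ofPoint& headPos, const ofPoint& leftHandPos, const ofPoint& rightHandPos) {
        head.draw(headPos.x, headPos.y);
        hand.draw(leftHandPos.x, leftHandPos.y);
        hand.draw(rightHandPos.x, rightHandPos.y);
    }

private:
    ofImage head, hand;
};
```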

Here’s a quick video to show the tracking with the images working.

During this time we have been recording the dialogue with professional recording equipment (the script can be found here: 3stateScript) and developing the graphics for the program. Initially, we tried making the graphics fresh out of Illustrator, but we found this process too time-consuming; it took a few hours just to make one keyframe, and in addition we felt the design would not be that appealing to young children, since they prefer more cartoonish characters. All the animation and graphical features have been drawn by hand, scanned and then digitally edited in Adobe Illustrator and Photoshop to give the images a vectorised and polished finish. The design goal was to create cartoons that look like paper cutouts; we believe this will make our design original and interesting to users. We want the artwork to be bold, colourful and cartoon-like, since we feel our target market will find this type of design appealing. Here are some mock sketches:

panda animation; pandora bunny

More updates coming soon,

The Hands Up Bunch.

Development Progress #1

Recently, we’ve made a lot of progress with our project, code-wise. As we’re working with OpenFrameworks and OpenNI, we’ve had to get to grips with this external library, which has proven to be more difficult than anticipated, seeing as none of us had ever worked in C++ before. We have had the Kinect working for a little over a week, and now the skeletal tracking is working as intended.

So why is this such a big step? Well, it means that we can now work out how to access the appropriate skeletal points and make use of them; for example, putting images on each of the points to create a character on screen, or detecting whether a hand has collided with an object (see our wireframe for a better idea).
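
The collision part boils down to a distance check between the hand's screen position and the object; here is a minimal sketch (the 40-pixel threshold is just an assumed value to be tuned):

```cpp
#include "ofMain.h"

// Returns true when the tracked hand point is within the object's radius.
bool handHitsObject(const ofPoint& hand, const ofPoint& objectCentre, float objectRadius) {
    return hand.distance(objectCentre) < objectRadius;
}

// usage: if (handHitsObject(rightHand, fireflyPos, 40)) { /* caught! */ }
```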

Here’s a quick screen capture of the skeletal tracking working.

More updates coming soon,

The Hands Up Bunch.

Proposed Method Schedule

In order to ensure we are working in an efficient manner and are able to complete this project by the deadline set, we have made the following proposed method schedule, which can be accessed here: Proposed Method Schedule.

We have considered the skill sets of each of our team members and assigned them tasks that suit their creative abilities.

Evaluating this process [post-development]:

We believe the tasks were distributed fairly and effectively; everyone enjoyed implementing their contribution to the project and felt it suited their skill sets. Although as a group we had fixed tasks to complete, there were times when we would interchange roles and temporarily trade tasks, which meant everyone was doing a bit of everything. We definitely underestimated the amount of technical difficulty we faced, which set us back quite drastically in terms of time. Our team had no prior knowledge of C++, OpenFrameworks or the Kinect, which meant the very early stages of our development consisted simply of watching tutorials and teaching ourselves everything about those topics. Getting the OpenNI addon working was the most time-consuming and difficult aspect of this project; Mac users found it tricky but managed to get it functioning after some time, whereas Windows users found it incredibly tricky and took significantly more time to find a solution.

After this hurdle was overcome, we had a shorter amount of time to implement the code and arrange user testing. Attempting to organise user testing with young children is a very hard and arduous task; more detailed information about this process is written in this blog post. Without user testing data it becomes harder to figure out which features need to be modified and what the next step should be. We found that this lack of direction meant we had to predict what to improve; we were designing a product based on our own perception rather than the users' responses.

Personally, we feel we should have started trying to arrange user tests sooner, to allow for rejections. Throughout this process we wanted to avoid conducting user tests on participants we personally knew; however, we regret this decision to a certain degree, since participants we know would have been available for user tests on a weekly basis. If, throughout the development process, we had shown our project to children every week, we would have generated a lot of useful information. Although the general rule is that you should not conduct user tests on a sample audience you personally know, we feel we should have done so: more data is better than a limited amount of data.

Evolution of the Hands Up Wireframe

Earlier Stage:

Initially, the project was designed to include a fairly long storyline, more graphics, audio and gestures, to involve 2 participants simultaneously, and to display text that lights up as it is read. This wireframe can be accessed here: Wireframe.

The wireframe consists of two types of scenes: interaction scenes, where the user participates in the story by becoming the main character (e.g. searching for clues in a dark forest), and animation scenes, where an animation and audio dialogue play. The interaction scenes involve using certain gestures, such as making swimming motions with the arms and legs or chopping down a thorny forest.

During the development process, we realised we had come up with an idea that was not feasible; bearing in mind the time scale and the limitations of our skill sets, we decided to shorten the storyline and reduce the number of features in the program.

Later Stage:

Our wireframe evolved to become this: 3 state wireframe. We kept the set-up of the project the same; we still used interactive scenes and animation scenes, but we shortened the story and included fewer gestures, since we felt this was a more realistic goal. We managed to implement the full storyline and all the features we aimed to include. The only difference is that we decided to allow only 1 user to participate at a time instead of 2. This was due to the Kinect's limited detection abilities, which made it harder for two people's limbs to be detected and made the tracking almost completely unusable. Children's heights also vary, which we noticed affected the Kinect's ability to track; we constantly had to shift the Kinect higher or lower depending on the participant's height, and when two children of different heights participate at the same time, the Kinect does not cope very well.

One feature we really wanted to implement but were unable to is text that lights up as it is read. We felt this feature would be a useful way to help children recognise English terms and improve their spelling, vocabulary and grammar. If we had more time we would also have liked to lengthen the storyline to make the narrative more interesting and understandable.
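
Had we got that far, one possible approach (the word timings, colours and font handling here are purely illustrative, not code we wrote) would have been to store each word with the time it is spoken and highlight the words the audio has already reached:

```cpp
#include "ofMain.h"
#include <string>
#include <vector>

// A word of dialogue and the moment it is spoken in the audio.
struct TimedWord {
    std::string word;
    float startTime;   // seconds into the dialogue audio
};

// Draws a line of dialogue, colouring words that have already been spoken.
void drawHighlightedLine(ofTrueTypeFont& font, const std::vector<TimedWord>& words,
                         float audioTime, float x, float y) {
    float cursorX = x;
    for (const auto& w : words) {
        if (audioTime >= w.startTime) ofSetColor(255, 220, 0);   // already read: highlight
        else                          ofSetColor(255);           // not yet read: plain
        font.drawString(w.word, cursorX, y);
        cursorX += font.stringWidth(w.word + " ");
    }
}
```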

Overall, we felt satisfied with our artwork, the audio we produced and used, the gesture recognition implementation and how well we worked as a team.