Updates on my Final Year Project

How’s my final year project going? I know it’s crazy that, two weeks away from the submission deadline, I am writing a blog post about it. But I am travelling with no Internet access, and I felt like writing.


My report already exceeds 20,000 words and I am extremely happy with the literature review section. I discuss the current state of higher education in the sub-Saharan Africa region, as well as initiatives that support its development. I also go over the state of Internet availability, its costs and the overseas Internet infrastructure that links Africa with the rest of the world. One Kenyan start-up, Brck, stands out on this topic: it created a portable Internet router and the Brck Education platform. I have been surprised to see the great interest, both local and international, in supporting the development of and access to technology in this region. The most notable initiatives are the annual eLearning Africa Conference, USAID and the Alliance for Affordable Internet.

Here is also an amazing TED talk from Juliana Rotich on Internet connectivity in Africa. It was presented in 2013 in Edinburgh.

Recent reports show that the most used Internet connection now is 2G, with a few countries such as Nigeria or Kenya adopting 3G. Costs are not to be neglected either: in some cases, the fee for an Internet subscription can reach 90 to 100 percent of a family’s monthly income.

I have also discussed the online learning technologies used in African universities and other forms of higher education. It was clear that African students benefit widely from this new type of connectivity, which helps them communicate with students and lecturers beyond their local area, explore opportunities, exchange ideas and find new ways of accessing content.

My research then goes into online video streaming for slow Internet connections. An overview of the media streaming process is included to offer a bit of context. The report then goes into more detail on adaptive bitrate streaming over HTTP and how it is designed to react to adverse conditions. I have been lucky to find a wide range of research papers on the topic, with experts discussing all sorts of new methods and algorithms to make the video segmentation and streaming process as seamless as possible. Forecasts suggest that within a decade, video streaming will make up approximately 80% of all Internet traffic, so interest is high among all companies whose business revolves around Internet use. We can see how services like Netflix and YouTube, as well as Facebook’s live video and video content, gain more users day by day.

The protocol I selected for my project, DASH (Dynamic Adaptive Streaming over HTTP), is an open standard developed by the MPEG consortium with the support of major technology companies, such as Microsoft and Adobe, which together form the DASH Industry Forum. The need for a standard was long overdue. DASH is based on HTML5 and the Media Source Extensions, which makes it available in the vast majority of web browsers. We all remember the Adobe Flash Player, needed a couple of years ago (and still used on some websites, I won’t name them!) in order to view video content. And let’s not forget Microsoft’s Silverlight. These were alternatives that only annoyed the end user and offered more downsides than advantages.
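To make the browser-support point concrete, here is a minimal sketch of how a page might check for the Media Source Extensions API before attempting DASH playback. The helper name and the mock object are my own, for illustration only; in a real page you would pass in `window`.

```javascript
// Minimal sketch: detect Media Source Extensions support before
// attempting DASH playback. `globalScope` stands in for `window`,
// so the check can also be exercised outside a browser.
function supportsMSE(globalScope) {
  const MS = globalScope.MediaSource || globalScope.WebKitMediaSource;
  return typeof MS !== "undefined" &&
         typeof MS.isTypeSupported === "function";
}

// A fake window object, only to demonstrate the check here.
const fakeBrowser = {
  MediaSource: { isTypeSupported: (mime) => mime.startsWith("video/mp4") }
};

console.log(supportsMSE(fakeBrowser)); // true
console.log(supportsMSE({}));          // false
```

On a browser that fails this check, a player would fall back to serving a plain progressive-download video file instead of DASH segments.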

I also did a bit of research into CDN and video encoding providers and decided to go with Amazon Web Services CloudFront and Bitcodin. They work really well together, as all my encodings are transferred directly to my S3 bucket. I wrote more on this topic in this post.

The development stage of my project’s prototype is now complete and I am currently testing it with University students, including some from the Nigerian student society at my University. I am super happy to receive their feedback and see how well I’ve managed to address the issues. Oh, and I’ll also borrow some devices this week from our technical support desk so I can test the streaming implementation on real feature phones and tablets with slow CPUs.

Overall, I couldn’t be happier with the topic I have chosen and I can see this from the amount of work I am able to produce for this project, compared to other modules. As I have said, just 2 more weeks until I submit my report and then I’ll have to present it by mid-May in front of my supervisor and examiner.


EnhanceConf 2016 London

Last Friday I was lucky enough to obtain a scholarship to attend EnhanceConf. Based on the concept of progressive enhancement in web development, the conference was the first of its kind and gathered a set of incredibly good speakers.

The principle of progressive enhancement encourages developers to first build applications that are simple and perform well on any device and browser, even offline. Enhancements suitable for the newest devices, latest browsers and fastest Internet speeds are then slowly layered on top of these basic requirements. You can read Aaron Gustafson’s article for more details on the concept, as he coined the term.

The lessons I gathered from this conference are incredibly useful for my final year project. The biggest challenge for me is to deliver the fastest experience on mobile devices with low processing power and slow networks such as GPRS and 2G. The size and number of HTTP requests should therefore be minimised as much as possible, making sure users get all the features they need without wasting their time or data.

By far the best talk of the day came from Jen Simmons, a web designer and developer with great experience and ideas. Her talk was entitled ‘Progressing Our Layouts’ and emphasised the fact that we are able to create amazing online experiences with little to no JavaScript, by embracing the capabilities of CSS. I realised we usually underestimate the power CSS has and how beautiful it is to simply design in your browser. CSS has great error handling: as she outlined, if a CSS rule is not supported in the user’s browser, it is simply ignored. Nothing bad happens if, instead of seeing a 5px border radius, users get square corners. On the other hand, one error in your JS can compromise your entire website. Of course, that is the point of JS testing, but it is a nice example of the difference JS makes when used for enhancing a website. I also very much agree with her about the excessive use of UI frameworks nowadays. I know they make life easier for many developers, but they also make all websites look the same. Websites lose their individuality and gain page weight.

My other favourite talk came from Stuart Cox, a developer at Potato London, who gave a very good presentation on progressive enhancement and fighting fragmentation. He presented the link between features (core functionality + enhancements) and capabilities (browser, device, human). He proposed the use of atomic enhancements (small enhancements that are either applied fully or not at all), feature detection (checking the user’s capabilities) and the grouping of features as modules. An overall great philosophy towards web development and application structure.
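Here is a small sketch of how that philosophy of atomic enhancements plus feature detection might look in code. All the names and the capability list are my own illustrations, not taken from the talk:

```javascript
// Sketch of "atomic enhancements": each enhancement declares the
// capabilities it needs and is applied fully or not at all.
function applyEnhancements(capabilities, enhancements) {
  const applied = [];
  for (const e of enhancements) {
    // Feature detection: every required capability must be present.
    if (e.requires.every((cap) => capabilities.includes(cap))) {
      e.apply();
      applied.push(e.name);
    }
  }
  return applied;
}

// Illustrative capabilities detected on a hypothetical device.
const detected = ["touch", "serviceWorker"];

const result = applyEnhancements(detected, [
  { name: "offline-cache", requires: ["serviceWorker"], apply: () => {} },
  { name: "webgl-chart",  requires: ["webgl"],         apply: () => {} },
]);

console.log(result); // ["offline-cache"]
```

The core functionality runs regardless; only the enhancements whose capability checks pass are layered on top, which is exactly the "fully or not at all" idea.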

Forbes Lindesay (Facebook) spoke about server-side JavaScript, React and shell rendering. Oliver Ash, a web developer at the Guardian, explained how his team built the offline page of the online newspaper with the help of service workers. During the morning session, two other great talks came from Stefan Tilkov, on web architecture and using the browser smartly, and Anna Debenham, who has conducted an in-depth study of console browsers and their use.

Overall, this conference was a great learning experience for me. It offered me an insight into where front-end development is at the moment and where it is heading. Hearing from professionals with experience in this field made me realise how important it is to always stay updated with the latest trends, keep up with development and be flexible and open to new ideas, while still having your own set of principles. In a nutshell, to adapt.

The organisers have also created a website dedicated to Progressive Enhancement, where the community can find out about interesting developments and events surrounding this concept.


Deep Work, by Cal Newport

I would wholeheartedly recommend this book to anyone who struggles to stay focused on their work. Cal Newport coins and analyses the term ‘deep work’, a skill mastered by only a few nowadays, but one of extremely high value in our economy.

As a final year undergraduate student, I could easily see deadlines for projects and exams, as well as my dissertation, all approaching, yet I simply did not feel productive and effective in my work. They all involve complexity and a lot of attention to detail in order to be high-quality, A-grade projects. While working, I would easily get distracted by social media and email notifications, unimportant tasks I scheduled without actually putting much thought into them, and web surfing, all of which only made me feel bad at the end of the day, knowing I could have made more progress on my projects.

After watching a TED Talk on presence from the incredible Amy Cuddy, I looked for her book on Amazon and ran into this one. I knew immediately it was exactly what I needed to get back on track and exercise my focus while working. The entire book is based on the hypothesis the author formulates in the beginning:

The ability to perform deep work is becoming increasingly rare at exactly the same time it is becoming increasingly valuable in our economy.

The book is structured in two parts. In the first, he explains how the concept works; he then goes on to advise on how to approach this style and integrate it into your lifestyle, depending on your type of job and your priorities. The advice is very actionable: I started including deep work in my routine a week ago and have already become much more organised and productive.

I wake up daily at 7:15 am, leaving enough time for coffee and breakfast so that I can start working at 8 am sharp. I can easily combine deep work on my projects and research with University classes and daily trips to the gym, scheduling any other commitments around them. If I have to travel, I try to use my time as efficiently as possible, reading a book on a technical topic I am interested in or saving articles related to my projects for offline viewing.

It is also definitely motivating to read the case studies he presents, understanding how people who manage to stay focused on a task for long periods perform much better than those who constantly multi-task, switching their focus every 30 minutes or so. In my case, at least, I feel I am much more productive when I am surrounded by complete silence, so that my brain can focus solely on the task at hand. I was amazed by how many tasks I crossed off my to-do list in just 3 hours of deep work. With my normal approach to work, it would probably have taken me 3 days to complete them all this thoroughly. Interruptions are dangerous.

It might feel a bit tiring the first few days, while you train your brain to work this way, but believe me, the results are amazing. Our willpower is limited, so we must not stretch it to its limits. It is much better to incrementally train our brain to concentrate deeply while working, so that it will do so automatically over time. I normally finish everything I have to do by 5 pm so that I can enjoy doing some exercise, reading, talking to my friends or watching a funny TV show afterwards. I can also catch up with my family over Skype, write a new post on my blog or catch up with the news on my Twitter feed.

I could go on writing for ages about this concept and how valuable I find it, but I hope I have sparked your interest in reading the book. A colleague of mine already bought it, just 3 days after I told her about it.

On an ending note, we should not forget that our time is our most valuable asset, so we should carefully consider what we invest it in, to make sure we get the best return on investment.


Video Content for my Final Year Project

So, I am creating a prototype of a MOOC platform that addresses online video streaming optimisation for central Africa. So far so good. But what about the actual videos? I can’t download or copy courses from other higher education institutions or MOOC platforms: there are copyright issues and I wouldn’t like to upset anybody. And I definitely won’t be able to create good content myself; it would be beyond the scope of the project and quite time-consuming.

Thus, to make things easier, I have been advised to use our own University’s platform for video lectures, UniTube. It is not very well advertised at our University, so not many students or lecturers know about it. It is maintained by a British company called Planet eStream, which specialises in content sharing for educators. The UI of the platform does not quite belong to this century, but that doesn’t surprise me.


Maybe as it becomes more frequently used as part of courses, it will benefit from a redesign. I am sure a lot of students in Web Tech will take on the challenge. So far, as I browsed through the entries, lecturers have uploaded screencasts, documentary recommendations and even their own lectures. I think the University is currently putting a lot of effort and resources into equipping most of the classrooms and labs with video recording systems. Probably in 1-2 years, students will not only have lecture slides posted on our learning platform, but video recordings as well.

Some lecturers do have mixed feelings about this, and it’s understandable: they are afraid attendance will drop significantly if all lectures are recorded. But one of the students came up with a great idea: only allow access to video recordings for students who attended the lecture. Easy peasy.

But, coming back to my project, the first video file that I’ll test in my development is one of my lecturer’s screencasts from the Artificial Intelligence course, where he explains the use of the itSIMPLE planning tool. As soon as that works as expected, I will try to retrieve some more relevant, good-quality content from the platform to add to my prototype.


Final Year Project Poster

Hey everybody! Happy New Year! This year starts with the poster presentation we’ll have at University on the 14th of January, for which all final year students had to prepare a poster for their project. Below, I share my poster, which may suffer minor changes by the time I actually present it. When I structured it, I tried to keep it simple and visual and to present the main ideas in a succinct manner. I hope I got it right, and I’ll try to perfect it in the next couple of days.

Update: Following the poster session, I received some very good feedback. First of all, I will use the University’s video platform for lectures, UniTube. It will help me with a key aspect of the project: content with no copyright issues. Secondly, in encoding my videos, I have been advised to omit the HD encodings, since there is no reason to send ultra-high-definition segments over HTTP when the purpose is to speed up the streaming process. In terms of testing the project, people told me it would be a good idea to speak with some NGOs that work in this field in Sub-Saharan Africa and may help me identify possibly useful features that I’ve missed.

Now, as soon as I finish the requirements and design stages, I’ll start developing the video player using Dash.js. Can’t wait!


HTTP Video Streaming

As I have mentioned in my previous post, my dissertation will focus mainly on optimising online video streaming for slow Internet connections. During the first week of December, I finalised my literature review section and had the chance to learn more about the process of online video streaming. I have also included in my study a detailed description of current online learning initiatives in central Africa and a case study of the technologies used by Coursera, one of today’s largest MOOC platforms.

I’ll try to briefly describe in this post my findings on how HTTP video streaming works. First of all, to stream data, either video or audio, means that the client starts processing the data as it arrives, before everything has been received. Video files are commonly large, and it would probably take minutes, depending on the download speed of your Internet connection, until you would be able to play and watch the video. Not a very good experience for the user. Therefore, improvements had to be made.

What is now most commonly used is adaptive bitrate streaming. The most popular and effective protocol at the moment is DASH, which stands for Dynamic Adaptive Streaming over HTTP. DASH was born out of the need to standardise online video streaming: multiple HTTP delivery platforms, such as Microsoft Smooth Streaming, Apple HLS and Adobe HDS, posed a lot of challenges for content delivery operators, which had to resolve multiple compatibility issues. In 2009, MPEG (the Moving Picture Experts Group) decided to develop an industry standard for HTTP streaming, and two years later the DASH protocol was created. The DASH Industry Forum, supported by large technology companies, has developed the Dash.js JavaScript library to enable the development of video players that support this standard. Large on-demand video providers such as Netflix, YouTube and Hulu have already used this JS library and adapted it to their platforms’ needs.

When DASH is used, the video file is segmented and each segment is saved at multiple quality levels (audio only, 240p, 360p, 480p or 720p). The client can then opt for a better quality if bandwidth permits: as soon as it senses the Internet connection has improved, it can fetch a better-quality segment, adapting again whenever necessary. Data is downloaded over the Transmission Control Protocol (TCP), a more reliable alternative to the classic User Datagram Protocol (UDP), which does not benefit from packet retransmission mechanisms. Encoding.com has a very good visual representation of how this selection process works, depending on the mobile Internet connection.

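To make the adaptation step concrete, here is a toy sketch of the kind of rate selection a DASH client performs: given the representations advertised in the manifest and an estimate of the current throughput, pick the highest bitrate that fits within a safety margin. The names, bitrates and the 0.8 margin are illustrative assumptions of mine, not Dash.js internals.

```javascript
// Toy rate-selection sketch: choose the highest-bitrate representation
// whose bandwidth requirement fits within a fraction of the measured
// throughput. Real DASH clients use more sophisticated heuristics
// (buffer occupancy, throughput smoothing, etc.).
function selectRepresentation(representations, throughputKbps, safety = 0.8) {
  const budget = throughputKbps * safety;
  // Sort ascending by bitrate, then keep the best one that fits.
  const sorted = [...representations].sort((a, b) => a.kbps - b.kbps);
  let choice = sorted[0]; // always fall back to the lowest quality
  for (const rep of sorted) {
    if (rep.kbps <= budget) choice = rep;
  }
  return choice;
}

// Representations like those advertised in a manifest (illustrative).
const reps = [
  { label: "audio-only", kbps: 64 },
  { label: "240p", kbps: 400 },
  { label: "360p", kbps: 800 },
  { label: "480p", kbps: 1500 },
];

console.log(selectRepresentation(reps, 1200).label); // "360p"
console.log(selectRepresentation(reps, 100).label);  // "audio-only"
```

Because each segment is only a few seconds long, the client re-runs this decision constantly, which is what lets playback keep going on a connection that fluctuates between 2G and 3G speeds.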

The Dash.js library makes use of the HTML5 video tag and is an open-source library with great support from large technology companies such as Microsoft.

In terms of online video players, we can put them into three main categories:

  • Microsoft Silverlight
  • Adobe Flash Player
  • HTML5 players

To use the first two, you need to install a browser plugin in order to view the video content. With HTML5, however, everything is built in, as your browser already knows what the HTML5 video tag means and is supposed to do. A lot of HTML5 video players are out there on the market, usually with an Adobe Flash fallback. One player especially developed for adverse (mobile) network conditions is Bitdash. The Austrian company behind it invests a lot in research, implementing the latest developments in online video streaming in their MPEG-DASH player.
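For context, the built-in HTML5 approach can be as simple as this markup (the file names are placeholders, not real content from my project):

```html
<!-- No plugin needed: the browser's built-in HTML5 video element,
     with multiple sources and a plain-text fallback message. -->
<video controls width="640" poster="poster.jpg">
  <source src="lecture.mp4" type="video/mp4">
  <source src="lecture.webm" type="video/webm">
  Your browser does not support HTML5 video.
</video>
```

A DASH player like Dash.js then takes over an element like this and feeds it segments through the Media Source Extensions, instead of pointing it at a single static file.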

I hope this post was informative and not too long. I will expand more on how online video players work in a future post and decide which one I will use for my final year project. For the time being, I will focus on developing the project’s poster for our presentation session on the 14th of January.

 


My Dissertation Project

September 2015 is here, which means only one thing: there’s no more time to waste, I have to start working on my dissertation project! I thought I should introduce you to my idea, so here is a summary of what I intend to work on for my final year project, which has to be presented in May 2016.

I thought a lot about what I should focus on for my dissertation and one thing was clear from the beginning: I have to be passionate about and truly believe in the topic in order to make sure I develop the project to the best quality standards it deserves. After all, I’ll dedicate an entire academic year to one main project, so I was afraid of losing interest halfway through if I didn’t care about the idea wholeheartedly.

With that in mind, I came up with this great project idea that checks all my criteria!

  • It is my strong belief that technology should be developed with a useful purpose. Whenever I develop something, it has to benefit society or its particular set of future users in a meaningful way.
  • I also wanted it to combine theoretical topics of research with practicality.

The conclusion led me to a topic I deeply care about.

I will develop a prototype of a MOOC (Massive Open Online Courses) platform created especially for countries in central Africa. I believe higher education there should be encouraged to grow, as it can radically help society develop. These countries need trustworthy leaders and local business people to help grow the economy and offer true role models for the younger generation. The most important benefit of the Internet, in my opinion, is that it reaches billions of users; it breaks distance barriers. With my project I want to show that you can create and offer students personalised courses from renowned universities that address their exact needs and bring them the knowledge they need in order to grow. All of this at no cost and optimised for the mobile devices that are now rapidly being adopted in these countries.

I will make sure I keep a log of all my interesting findings over the year under the #dissertation16 tag on my blog, and show you how the project takes shape. I am open to any comments and would be more than happy to hear from you if you have tips or advice related to MOOCs, e-learning or the need for education in central Africa.