Tuesday, November 10, 2009

Use SearchControl and SearchForm of Google Search Ajax API

When using the Google Search Ajax API, we may sometimes want to add functions when the user initiates a search or clears the search results. The SearchForm is designed for this: it provides .setOnSubmitCallback(object, method) and .setOnClearCallback(object, method) so developers can define methods to call when the user searches or clears the results. But can we use the SearchControl, which provides an easy-to-use, nicely wrapped searcher, together with the SearchForm? The answer is yes. Here is example code using them together.
function OnLoad() {
  var searchControl = new google.search.SearchControl();
  searchControl.addSearcher(new google.search.WebSearch());
  searchControl.addSearcher(new google.search.ImageSearch());

  var drawOptions = new google.search.DrawOptions();
  // Tell the SearchControl where to look for the associated SearchForm.
  drawOptions.setSearchFormRoot(document.getElementById("searchcontrol"));
  searchControl.draw(document.getElementById("containerResults"), drawOptions);

  var searchForm = new google.search.SearchForm(true,
      document.getElementById("searchcontrol"));
  searchForm.setOnSubmitCallback(searchControl, function() {
    // Custom on-search logic goes here; then run the search through the form.
    this.execute(searchForm.input.value);
  });
  searchForm.setOnClearCallback(searchControl, function() {
    // Custom on-clear logic goes here.
    searchForm.input.value = '';
  });
}

The HTML code is

<div id="searchcontrol"></div>
<div id="containerResults"></div>

The key to connecting these two parts is drawOptions.setSearchFormRoot(document.getElementById("searchcontrol"));, which tells the SearchControl where to look for the associated SearchForm. Another thing to notice is that, in the SearchForm callbacks, the execute() method to call should be the SearchForm's, as in this.execute(searchForm.input.value);, not the SearchControl's (searchControl.execute(searchForm.input.value);); otherwise the method set with setOnClearCallback will not be called when the user clears the search results.

Things to notice when testing forms in Wicket

Wicket, a Java-based web application framework, provides a powerful tester for testing applications. The tester can simulate user activity on the web application. The form tester is a component of this tester, used to test form behavior on a page. However, there are a few things to be aware of when using the form tester.

Let's look at this example.
We test a form on the Checkout page, which contains text fields for name, street, zip code, and city, plus a drop-down choice for the state.

The test code is
FormTester formTester = tester.newFormTester("form");
formTester.setValue("name", "Philip");
formTester.setValue("street", "Main Street");
formTester.setValue("zipcode", "96822");
formTester.setValue("city", "Anchorage");
formTester.setValue("state-wmc:state", "Alaska");
formTester.submit();
// Assert that submission moved us off the Checkout page
// (the target page class here is illustrative):
tester.assertRenderedPage(ConfirmationPage.class);

But when we run the test, the last assertion reports that we are still on the Checkout page. Can you spot what is wrong? The problem lies in
formTester.setValue("state-wmc:state", "Alaska");

And the fix is

formTester.select("state-wmc:state", 1);

The reason is that FormTester.setValue() cannot be used to set the value of a selection field, so the "state-wmc:state" field is never filled; when the form is submitted, an invalid-value error occurs. This is a common mistake when using FormTester.

Additionally, to find out what went wrong when a test fails, we can use Tester.assertErrorMessages(String[]) to inspect the error messages.

Monday, August 24, 2009

Final Summary of GSoC 2009

As the fall 2009 semester begins, GSoC 2009 comes to an end. During it, I once again successfully completed the project to the expected goal.

At the beginning of the project, problems with the Issue sensor and its data structure design set me quite a way back from the original schedule. However, after I settled them, the rest of the process became amazingly smooth. It took me about two to three weeks to finish the Issue DailyProjectData analysis, another week for Telemetry, and several minutes to put it into the Software ICU! I again experienced how a well-designed, extensible system can boost new development on top of it. Though I am the main developer of the Software ICU, I really had little to do with the earlier development of DPD and Telemetry, so it was not my prior experience that made development so fast.

Adding the Issue analysis to the Software ICU only involves editing a single configuration XML file (I added it to the default configuration, but it is fine to add it to your own configuration if you have not upgraded the code in time =P). The main hurdle for a newcomer may be finding out which file to edit and where it is; I just noticed there is no documentation about this on hackystat-ui-wicket, and I will add some soon. Another possible issue is that none of the current stream classifiers fits the concept of Issue analysis. On the other hand, I do not yet have a clear idea of how the Issue stream should indicate developer performance. This can be one of the future studies of Issue analysis.

In conclusion: last year I followed the path of extensible system design when building the system. This year I experienced the benefit of having built it that way, and I am thankful that we did. I am pretty sure this is the approach I will keep following from now on.

Monday, August 10, 2009

Almost done with the summer project

As of today, the Telemetry analysis of Issue data and the Issue DailyProjectData page in Project Browser are finished. Because of the way the Telemetry service and the Telemetry page in Project Browser are implemented, the new issue telemetry chart will simply show up in Project Browser with no modification needed. That means most of the expected features of the summer project are done. The only thing not yet finished is putting the Issue analysis into the Software ICU.

What I most need to do now is put the Issue sensor into use in Hackystat's Hudson service, so that daily issue data will be collected. Once the Issue Telemetry analysis is available, the Software ICU can easily utilize it by modifying the configuration XML file.

We are in good shape here.

Monday, August 3, 2009

Experiencing Telemetry

Work this past week has been on hackystat-analysis-telemetry, to include telemetry analysis of Issue data in the Hackystat system. Thanks to the well-constructed system serving as an example and the ample documentation, it was again quite straightforward to reach the goal. However, while coding, I found the telemetry language somewhat odd.

The language defines Telemetry charts using predefined language components, such as reducers, which generate stream point values from DPD analyses, and axes. In a chart definition, after defining the name and parameters, the kernel part looks like this:

chart Issue(member, status) = {
  "Issue invocations",
  (IssueStream(member, status), yAxisZeroBased("Issue Count"))
};

Then we need to define the stream called IssueStream; after the parameter description, its kernel part looks like this:

streams IssueStream(member, status) = {
  "Issue counts for the given mode for this Project",
  Issue(member, status)
};

The Streams definition seems redundant to me because it adds no information beyond the reducer. So why not use the reducer directly in the chart definition? I think it would be nicer if the telemetry language could directly use a normal reducer like DevTime to generate the member-level chart, so that there is no need to write the member-level reducer again in Java.

Now I am coding the test cases for the Issue Telemetry analysis, and need to make up Issue data again.

Monday, July 27, 2009

Finished milestone up to DPD

It is a critical week for the Hackystat issue project: the functionality from the issue sensor up to DailyProjectData is ready for release, along with updated documentation found in the
Ant Task Reference

The new build of Hackystat including these features should be released very soon, and it will be deployed in our daily Hudson build to start gathering issue data.

The next step is to implement the Telemetry analysis, then put all of this into Project Browser, including the Software ICU of course.

Tuesday, July 21, 2009

DPD almost done

The Issue analysis for DailyProjectData is almost done. The process was quite straightforward with the existing analyses as examples; there is nothing especially interesting to mention, in my opinion. The DPD data contains a total count of open issues for convenience.

While coding the DPD, I actually found some bugs in the issue sensor. When testing the sensor data parser, I realized I can make up issue instances as CSV for testing. This should be used in tests for the issue sensor as well.

Just keep working.

Tuesday, July 14, 2009

Moving a step forward

After a week of waiting, my little MacBook Pro came back home on Sunday safe and sound, without losing a single piece of useful data. As soon as I got it back, I started finishing up the issue sensor to get it ready to commit.

After serious consideration, I decided to discard the RSS feed as a data source completely: first, it does not provide much more useful information than extracting data from the issue summary table; second, the RSS feed service seems less stable than the issue tracking system, which makes the sensor unstable as well; and third, it makes the code more complex and the completeness of the data harder to validate. My opinion is: if users really don't want to miss a single update, they can simply run the sensor as often as they like.

I also found the issue sensor rather hard to test, because it is almost impossible to have a fully controllable, repeatable test environment; the content on Google Project Hosting keeps changing. For now, I test it by running it twice, making sure the first run generates some issue data and the second run detects no changes. The better way to examine whether the sensor works properly is to actually use it. So my job this week is to work on DailyProjectData, using the issue sensor data to generate issue DPD.

Monday, July 6, 2009

Commit Early, Commit Often

I just got a painful lesson in the Commit Early, Commit Often principle. My laptop (a MacBook Pro bought in '07) died on Sunday. I brought it to the Apple store and, luckily, can get it fixed for free, which will take one to two weeks. However, the code for the new version of the issue sensor is still lying on its hard drive, and the data is not guaranteed to be undamaged. That means three weeks of my work are gone, at least for the coming week or two. In fact, the functional code was already finished; I was just waiting to complete the unit tests before committing. I now deeply regret not committing the code early.

Thursday, June 25, 2009

Issue Sensor Redo

Progress on the issue sensor is behind expectations. After investigating SensorBase's mechanism, I started to redo the issue sensor. At first, I just wanted to modify the original code somewhat to make it a singleton. However, I made rather a mess of it and ended up redoing it almost from scratch. It is nearly finished and just needs some unit tests before I commit the code.

The sensor is more difficult than I imagined, because it now actually does some of the analysis work during sensor data collection. To preserve as much information as possible, the sensor takes not only the RSS feed but also information from the issue tracking system directly. Google's issue tracking system provides a RESTful link to the issue summary table in CSV format, which includes almost as much information as extracting the individual issue HTML pages. I only keep the interesting fields: id, type, status, priority, milestone, and owner. Here is how the sensor does its job.

First, it retrieves all issue sensor data from the SensorBase (this is not too much data, because there is only one piece of data per issue), and also gets the issue summary table. It then matches each issue from the table to a sensor data instance; if an issue does not yet have one, a new sensor data instance is created for it. Second, it checks the RSS feed and adds update information to the sensor data. The field names, such as id or type, are used as property keys, and each property value consists of the field value and the timestamp, joined with "--". Finally, new and modified sensor data are sent to the SensorBase.
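As a rough sketch of the matching step: here each sensor data instance is reduced to a plain property map keyed by issue id, and the class and method names are made up for illustration (the real sensor works with SensorBase's SensorData objects):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class IssueMatcher {

    /**
     * Matches each CSV row of the issue summary table to an existing sensor
     * data instance (here just a property map), creating one when the issue
     * id has not been seen before. Property values carry the timestamp of
     * the observation, joined with "--".
     */
    public static Map<String, Map<String, String>> match(
            Map<String, Map<String, String>> existing,
            List<String[]> csvRows, String timestamp) {
        // Columns kept by the sensor: id, type, status, priority, milestone, owner.
        String[] fields = {"id", "type", "status", "priority", "milestone", "owner"};
        for (String[] row : csvRows) {
            String id = row[0];
            // Create a sensor data instance for issues not seen before.
            Map<String, String> data =
                existing.computeIfAbsent(id, k -> new HashMap<>());
            for (int i = 1; i < fields.length; i++) {
                // Encode the field value together with when it was observed.
                data.put(fields[i], row[i] + "--" + timestamp);
            }
        }
        return existing;
    }
}
```

For example, a row {"27", "Defect", "Accepted", "High", "8.3", "austen"} observed at 2009-06-25T10:00:00 yields a property such as status = "Accepted--2009-06-25T10:00:00".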

Monday, June 15, 2009

Cannot put all data searching in the database

Issue Sensor Data

The issue sensor data is now decided to use the single-instance-per-issue design. The owner of the sensor data, which is the major problem of this approach, is set by the issue sensor, until we can come up with something better for deciding the data owner. We just want to move on to the core part instead of sitting there thinking about data ownership. That is the easiest way, but it requires the most attention from users to ensure things work well, so sufficient documentation may compensate somewhat, hopefully.

New API to SensorBase

When coding the issue sensor as a single instance per issue, it is important to get the data for a given issue efficiently. Since an issue is identified by its id (usually a number), and that id is stored as a property in the sensor data, it would be nice to extend SensorBase's API to return sensor data containing a given property with a given value. So I started to try to do this. Unfortunately, I found it is not easy to accomplish, and should not be included in SensorBase's functionality.

First, I studied the mechanism of its API. I thought the API would look like http://hostname:port/sensorbase/sensordata/user/timestamp/?sdt=sensordatatype&propertyname=propertyvalue. It is actually possible to use a dynamic property name, by putting "{propertyname}={propertyvalue}" in the route definition. Then, in the resource, one just gets the property name and property value as two Strings, the same way as a usual parameter.

Second, I tried to extend the database interface to allow querying sensor data with a given property entry, and found that the current database tables do not support this query. In the sensordata table, the columns are (Owner, Tstamp, Sdt, Runtime, Tool, Resource, XmlSensorData, XmlSensorDataRef, LastMod), and the property list is stored in XmlSensorData, an XML representation of the sensor data. To get the data, I need to query the XmlSensorData. However, to get a set of XmlSensorData, I first need to get their XmlSensorDataRefs, then use those to query the XmlSensorData. (This is not the only way of doing it, but it is the convention of SensorBase, probably to make it easier to separate data instances from query results.) Once I get the XML data, I parse the XML, extract the property list, and do the comparison; the API query then returns the XmlSensorDataRefs, and the user still needs to retrieve the actual data again from the SensorBase using those references. As you can see, there are duplicate queries from reference to data instance. That is because the SensorBase would be doing work that is supposed to be done by the data user. The most reasonable way is for the user to get the data from the data references, then do the comparison himself to pick out the data he wants.

Therefore, the issue sensor will just get all the issue data (those with sensordatatype = Issue), then compare the issue ids to find the one it needs to add data to.
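The client-side comparison described above can be sketched like this. Note that the XML element names (Property/Key/Value) are my assumption for illustration, not taken from SensorBase's actual schema:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.List;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class ClientSideFilter {

    /**
     * Returns the first sensor data XML whose property list contains the
     * given key/value pair, or null if none matches. This is the filtering
     * the data user performs after fetching instances via their refs.
     */
    public static String findByProperty(List<String> xmlInstances,
                                        String key, String value) throws Exception {
        DocumentBuilder builder =
            DocumentBuilderFactory.newInstance().newDocumentBuilder();
        for (String xml : xmlInstances) {
            Document doc = builder.parse(
                new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            NodeList props = doc.getElementsByTagName("Property");
            for (int i = 0; i < props.getLength(); i++) {
                Element p = (Element) props.item(i);
                String k = p.getElementsByTagName("Key").item(0).getTextContent();
                String v = p.getElementsByTagName("Value").item(0).getTextContent();
                if (k.equals(key) && v.equals(value)) {
                    return xml;  // this instance matches; the caller keeps it
                }
            }
        }
        return null;  // no instance carries that property entry
    }
}
```

This keeps the reference-to-instance round trips, but it puts the comparison on the data user's side rather than inside the SensorBase.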

Monday, June 8, 2009

Stuck in Issue Sensordata Design

Issue sensor data is so different from other sensor data in that it does not conceptually belong to a particular user. Instead, it belongs to a software project, but that project is not the same concept as a project in the Hackystat system. In Hackystat, a project is just a definition grouping users and data to represent an actual project; there does not necessarily exist an associated actual project. The problem is that all Hackystat sensor data belongs to a user. We have to pick a user to own the data, and that user has to stay in all the projects the issue may belong to for the whole lifetime of those projects. This is like an administrator of the project, but this administrator needs to be managed by users, not by the system. There is no administrator role defined among Hackystat users, and we don't want to make this exception for just one kind of sensor data (it is not necessary for the others).

The first design is to store the changes of an issue, starting from its creation and followed by its updates. Each issue sensor data instance is then assigned to the owner at update/creation time. The main resource would be the RSS feed of issue updates. The good part is that it keeps every update with the help of RSS, and the ownership of the data is reasonable. The shortcoming is that the RSS feed provides limited information: the creation entry only includes the comment, and update entries only include the states/labels that changed. The current unchanged state is unknown from RSS and has to be fetched from the issue tracking system (via HTTP in most cases). Another problem is that when analyzing the data, all data from the project start time has to be gathered together to reconstruct the view at a given time, which may be a lot of computation if there are many updates.

The second design is to associate a single sensor data instance with a single issue. The updates are stored in the properties list of that sensor data, from the same data source: RSS. When the issue data is created, the current states/labels are extracted from the issue tracking system, which makes it easier to keep track of future changes. It is also easier to analyze: one only needs to go through that single data instance to figure out the state at a given time. However, the problem with this design is the owner of the sensor data, because the owner must be known to retrieve the data, and for project-level analysis that owner has to be in the project the issue belongs to. There is no reasonable way to answer this question without some hack or a modification/addition to the current system definitions. It is possible to let the user define who the data belongs to, but that is unsafe, because it requires not only that the user knows exactly what he is doing, but also that all sensors collecting data for the same project are configured exactly the same. Otherwise, there may be multiple copies of the same data instance, which badly breaks the single-data assumption.

Thursday, May 21, 2009

Summer Plan

Google Summer of Code
During the summer, I will be doing Google Summer of Code 2009. My project is to add Issues to the Software ICU. The work ranges from collecting data from the issue tracking system to producing the final analysis in the Software ICU. The data collection part was finished during spring. So the plan for this project is:
  1. Review the issue data sensor and install it to Hackystat projects
  2. Write DPD analysis for the issue analysis
  3. Add Telemetry streams about issues
  4. Add Issues to Software ICU
  5. Revise the system from head to toe
I expect about two weeks for each task, ten weeks in total. Google Summer of Code lasts 14 weeks, so I will have four weeks to spare wherever needed, or to catch up if there is any delay. For the midterm goal, the first two tasks should be finished and the third one should be halfway done.

Master Thesis
The thesis is the other important task for me this summer, probably the most important one. There are six chapters in my thesis, and I expect to finish each one in one to two weeks, so it will probably take two months to have it done. During this time, Philip will revise each chapter after I finish it, and I will go over his comments on the fly. If everything goes as planned, my thesis will be finished before August.

The first chapter I will work on is the related work. I am looking for more related research on empirical software engineering to fill out this chapter.

Monday, May 11, 2009

Summary of Spring 09 Semester

Brief Summary
Most of the work related to my thesis research or Hackystat was done in the first half of this semester. As the semester went on, I got stuck in the two courses I took, ICS606 and ICS621. The homework and projects began to accumulate and took a lot of my time. Thus, my total research and development output is somewhat lower than the previous semester, and the progress of my thesis is behind my expectations.

Achievement in Spring 09

Tech-report of Hackystat Classroom Evaluation in Fall 2008
In Fall 2008, we deployed the Hackystat system in class projects of ICS413, together with a questionnaire survey near the end of the semester. Additionally, we gathered log data of the students' usage of the system. At the beginning of this semester, I reviewed the survey results, analyzed the log data, and then wrote a tech report on this evaluation. Since the major component of this evaluation is the Software ICU, most of this tech report should be able to go into my thesis somewhere.

Seminar Presentation of Software ICU
This is one of the most important steps in the progress toward my thesis, and a good chance to summarize the system. The slides can be found here. I spent a little more than a week preparing the presentation. It was quite a challenge for me, because it was my first time presenting to dozens of people, but the presentation turned out to be a success. It really encouraged me a lot.

Hackystat Manual Sensor
In Spring 09, my major development contribution to Hackystat was the manual sensor; related posts can be found here and here. It is a Java Swing application that lets users manually input data to reflect development activities that do not yet have a Hackystat sensor attached. The current version is as simple as a plain form plus a raw data viewer.

About My Thesis
Last week, I set up my plan to graduate in Fall 2009. That means the due date of my thesis will be sometime in the middle of October 2009, but I prefer to finish earlier, just in case accidents happen. My plan is to finish my thesis before August, and then defend it in early September.

Here is the draft of my thesis, which I would like to be considered as the technical report for this semester's independent study.

Thursday, April 30, 2009

Report on Agent Development Simulation Platform

In Assignment 3 of ICS606, I was supposed to install the RoboCup Rescue system and do some experimental agent development on it. However, even compiling the system turned out to be nontrivial. The system is written in both C/C++ and Java, and the latest version dates back to March 2007. Both GCC and the JDK are used to compile the system, but since that release, both compilers have been updated quite a bit. As a result, the code cannot be compiled with the latest versions of the compilers. Moreover, although the system is claimed to be platform-independent, its makefiles use some parameters that are not available on Macintosh (like -soname), and the source code contains some functions that have no Macintosh version (like pthread_yield()). The only guaranteed platform is Unix, in the state it was in when their latest version was released.

I tried to compile it on Mac OS X 10.5, Windows XP SP3, and Ubuntu 9.04, with the latest compilers. None of them had any luck passing compilation, and the errors on the three platforms were all different.

To successfully compile the system, the user has three choices. The first, and hard, way is to fix all the compile failures to match the compilers in the user's environment. The second, which is theoretical, is to configure the build system to be compatible with the state of the release day. The third, which is somewhat tricky but the easiest, is to install a standalone Linux using an old build; the easier variant is to install it in a virtual machine. Installing Ubuntu 6.10 (Edgy Eft) is proven to work.

By contrast, Robocode, which I got to know from a classmate's presentation, is much more user-friendly. The procedure from downloading to starting to use it is quite straightforward. The installation package executes correctly, and I could start writing my own agent within 10 minutes!

In conclusion, the RoboCup platform is ill-designed and extremely user-unfriendly, while Robocode is a much better, easy-to-use platform for agent simulation.

Monday, April 13, 2009

Boswell continued?

Last week I started working on the course project for the agent class. The project I picked is based on Hackystat and is a continuation of an existing topic: Boswell/Tickertape, an auto-blogging agent for software development. While working on it, I have kind of lost my way. There is no directly related research on this, but there are many likely related fields, such as language processing, knowledge bases, etc. The search space is so large that I am lost inside it; every potentially related field is too big to understand in a short time. However, without that understanding, it is hard to tell whether concepts and techniques from a field will be useful or not. Currently, I am putting my hope in language processing: there is some existing work there that also uses knowledge base concepts, and I hope I can get a breakthrough from it.

This research is not directly related to my thesis, but it is an interesting study of Hackystat. Hopefully I will end up with some useful insight after finishing the course project, if I can indeed finish it well...

Monday, April 6, 2009

Stuck in course work

Another week with nothing done on Hackystat or the thesis... But it was a busy week, because I was stuck in course work the whole time.

In the first half of the week, I was preparing for the midterm of ICS606 Autonomous Agents, which was on Wednesday. After that, I switched to the assignment for ICS621 Analysis of Algorithms. The assignment is to design an algorithm to solve the power grid load-shedding problem, an NP-complete problem over directed weighted graphs. It took me the whole week to think up a better algorithm, and in the end it turned out to be worse than brainless enumeration. That is quite frustrating.

This week, I will have to start the almost-forgotten course project for Autonomous Agents. The project I chose is to further develop the Tickertape in the Hackystat project, to grant it more intelligence so it acts more like a human being. The first step will be constructing "knowledge" about users' development behaviors from Hackystat sensor data. The hard part is designing the structure of that knowledge so that it not only has good representational power but also facilitates generating human-language sentences. I am not sure where to start: design the knowledge first and then the way to express it, or the opposite.

I hope I will be able to finish the course work faster and leave some time for my other jobs.

Monday, March 30, 2009

Taking a break, and GSoC 2009

As the name tells, I took a break during spring break. =P

Ideas of Google Summer of Code

In the last meeting with Philip, he gave me the idea of bringing Issue data into Hackystat, from collecting and sending data at the bottom to Software ICU analysis at the top, with an associated DPD analysis and a set of Telemetry analyses in the middle. Issue data is an essential sign of the health of project management, but it has long been missing from Hackystat version 8. Additionally, this is a good chance to work all the way through the system, which I have not experienced yet. This proposal contains enough work for three months, so I used this idea in my application to GSoC 2009.

Before that, I had another idea: provide an easier way to deploy Hackystat services. The first part is an easy way to deploy the SensorBase on various database implementations. Currently, users have to implement the interface in Java themselves and compile the system. I wanted to find a way to separate the database access part into a standalone component, so that users do not have to recompile the SensorBase every time it is updated. The database access component requires far fewer functional updates than the SensorBase, and thus fewer updates/recompiles, except perhaps for bug fixes. I might also implement it for some popular databases such as MySQL, IBM DB2, Oracle, and MS SQL Server. The second part is an administration tool to launch Hackystat services such as the SensorBase, DPD, Telemetry, and ProjectBrowser. It would provide a GUI to configure settings, and the capability to hide the command-line windows, making Hackystat run in the background like most system services do. I am familiar with neither database implementation nor GUI configuration building, so the amount of work is hard for me to estimate; I expect more time in research than in coding to accomplish this.

Monday, March 23, 2009

Revise my thesis

State of manual report tool

The manual report tool is finished with basic but complete functionality. The UI is similar to the figures posted before, except that the "Labels" field has been removed and labels are expected to be input in the "Resource" field. This is actually how the resource field in the sensor data is constructed; doing it this way makes users less confused when they see their "labels" shown in the resource field of the history panel.

As discussed before, the history panel manipulates sensor data piece by piece; no grouping function is provided.

Revise my master thesis

I have started revising my thesis from the "portfolio" version to a "Software ICU" version. The introduction is being totally rewritten to introduce the idea from a different angle. I am reviewing recent tech reports about Hackystat and the Software ICU, including 09-02, 09-03, and 09-07, to get ideas for the starting point.

Most of other parts, such as the related work section, can remain the same.

Monday, March 16, 2009

Self-report tool is ready

After a little talk with Philip, I realized that showing the raw sensor data may be even better than grouping it into events, because that way we give users more control over their data. The initial motivation for the "event management" feature was to support editing data. However, for the near-term goal we will not provide editing in the manage panel: if users think there is an error in their input, they can just delete the data and resend it. When managing raw data, a possible trick to save time is to delete only some of the data. The other reason I wanted the grouping feature was to make sure users delete their data correctly and do not leave unexpected data in the SensorBase. However, either way, users have to take responsibility for their data; the additional feature would not ensure they make fewer mistakes than without it. On the contrary, if the feature is less robust than it appears, users might pay less attention to the completeness of their data than they would with raw data, which could lead to more defective data. Therefore, on second thought, I have decided to leave the responsibility for data completeness with the owners until a sufficiently powerful and reliable manager exists.

The plan for this week is to finish the self-report tool. Everything is actually ready; it just needs more testing of sending out data. After that, thesis!

Monday, March 9, 2009

More analysis on log data

Accomplishments of last week
1. The tech report on the Hackystat evaluation 2008 has been revised. More per-student analyses of the data were added in order to gain further insight into the logged and survey data. It is interesting to find that some students' responses are far from reality: the usage frequencies they claimed are much lower than the actual frequencies logged by the system. Though it is hard to verify whether the error was intentional, their answers to other questions reveal an interesting point: those students are the ones who were concerned about sharing their data, especially DevTime and Commit. It is reasonable to infer that they were not happy with data exposing their laziness to their teammates.

2. All functionality for the initial release of the self-report tool is complete. Currently, the application takes a time period denoted by start and end times, a resource file, tools, and a label to generate the DevTime data. The UI is shown below.
The application is also capable of managing the data you have sent. It can retrieve data for a given day, filter the data by a given tool if available, show it in a table, and allow the user to delete data from it. The following image shows an example of this feature. The image was captured on an experimental implementation in which the data is not yet filtered to self-reported data only, which will be done in the actual release.
The current state of the manager is actually not quite useful, or even annoying. This is because the approach of devtime sensor data: a piece of data make a five mintes period as active devtime. So for every event that user reported, there is not a single data relative to it, instead, there are a set of data, each departed by five mintes, to present the length of the action period. When editing, these pieces of data are shown and treated as separated instance, and are deletable separately. This does kind of distrupting the concept behind the sensor data. In order to avoid this, the data should grouped by events and managed as events. This will be the next priority of this project.
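The grouping idea can be sketched as a small routine. This is a minimal sketch, not the actual Hackystat API: it assumes each sensor-data entry reduces to a timestamp, and that entries at most five minutes apart belong to the same reported event.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of grouping DevTime sensor data into events (hypothetical helper,
 * not the actual Hackystat API): entries whose timestamps are at most five
 * minutes apart are assumed to belong to the same reported event.
 */
public class DevTimeGrouper {
  private static final long FIVE_MINUTES_MS = 5 * 60 * 1000L;

  /** Groups sorted timestamps (ms since epoch) into lists of contiguous entries. */
  public static List<List<Long>> groupIntoEvents(List<Long> sortedTimestamps) {
    List<List<Long>> events = new ArrayList<List<Long>>();
    List<Long> current = new ArrayList<Long>();
    for (Long t : sortedTimestamps) {
      // Start a new event when the gap to the previous entry exceeds five minutes.
      if (!current.isEmpty() && t - current.get(current.size() - 1) > FIVE_MINUTES_MS) {
        events.add(current);
        current = new ArrayList<Long>();
      }
      current.add(t);
    }
    if (!current.isEmpty()) {
      events.add(current);
    }
    return events;
  }

  public static void main(String[] args) {
    // Two events: three entries five minutes apart, then one entry an hour later.
    List<Long> stamps = new ArrayList<Long>();
    stamps.add(0L);
    stamps.add(FIVE_MINUTES_MS);
    stamps.add(2 * FIVE_MINUTES_MS);
    stamps.add(2 * FIVE_MINUTES_MS + 60 * 60 * 1000L);
    List<List<Long>> events = groupIntoEvents(stamps);
    System.out.println(events.size());        // prints 2
    System.out.println(events.get(0).size()); // prints 3
  }
}
```

With grouping like this, the manager could show one table row per event and delete the whole set of underlying entries at once.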

Plan for this week

I have a paper presentation on Wednesday morning, which will probably take all my time until then. But after that, I can take a small break from my coursework and put more time into the lab projects and my thesis. So here is the plan for this week:
  1. Continue the self-report tool. It is very close to its designed functionality now.
  2. Revise my thesis.

Monday, March 2, 2009

Presentation Finished!

The biggest thing for me last week was the presentation of my thesis project in the seminar course. My slides can be found here. It was the biggest presentation I have ever made, and it made me nervous to death because I did not get much time to practice the talk. Fortunately, it turned out to be a success. I was planning to revise my thesis before preparing the presentation, but I found I was running out of time, so the thesis is still untouched. As discussed with Philip and Robert, my presentation did not include enough information about the evaluation results. I was thinking of adding more to the slides, but the results are too textual and long to summarize well in a few slides. Insufficient analysis of the logging data is another reason for this difficulty. Inspired by Philip, I started making more analysis charts over the data, and they do reveal some additional information. They will soon be added to the evaluation tech report.

Plan for this week:
  1. Finish the further analysis of the data and add the new results to the evaluation tech report.
  2. Finish the first release of the self-report sensor. The remaining work for the initial release is the ability to delete self-reported data, which mostly involves displaying an inner data object with a JTable.
  3. Revise my thesis. It has long been postponed and should be caught up on soon. I still want to finish the thesis before the submission deadline in order to keep the initiative on my graduation.
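The JTable work in the plan above usually comes down to a custom table model. Here is a minimal sketch of exposing an inner data list through an AbstractTableModel; the Row type and its fields are hypothetical placeholders, not the actual Hackystat sensor-data types.

```java
import javax.swing.table.AbstractTableModel;
import java.util.ArrayList;
import java.util.List;

/**
 * Minimal sketch of backing a JTable with an inner data list.
 * The Row type and its fields are hypothetical placeholders.
 */
public class SensorDataTableModel extends AbstractTableModel {
  /** One displayed sensor-data entry (placeholder fields). */
  public static class Row {
    final String timestamp;
    final String tool;
    public Row(String timestamp, String tool) {
      this.timestamp = timestamp;
      this.tool = tool;
    }
  }

  private static final String[] COLUMNS = { "Timestamp", "Tool" };
  private final List<Row> rows = new ArrayList<Row>();

  public void addRow(Row row) {
    rows.add(row);
    fireTableRowsInserted(rows.size() - 1, rows.size() - 1);
  }

  /** Removes the row at the given index, e.g. when the user deletes an entry. */
  public void removeRow(int index) {
    rows.remove(index);
    fireTableRowsDeleted(index, index);
  }

  @Override public int getRowCount() { return rows.size(); }
  @Override public int getColumnCount() { return COLUMNS.length; }
  @Override public String getColumnName(int column) { return COLUMNS[column]; }

  @Override public Object getValueAt(int rowIndex, int columnIndex) {
    Row row = rows.get(rowIndex);
    return columnIndex == 0 ? row.timestamp : row.tool;
  }
}
```

A JTable constructed with this model (`new JTable(model)`) repaints automatically when `addRow` and `removeRow` fire their change events, so the delete feature only needs to call `removeRow` with the selected index.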

Monday, February 23, 2009

Common Lisp to my thesis

During the last week, little work was put into either the self-report tool or my thesis, because most of my time went into the assignment for ICS606 Autonomous Agents. This assignment took me a whole week because it is based on Lisp: it requires enhancing a vacuum agent written in Lisp. Though I had taken the undergraduate class about Lisp, I was close to a novice, because I never got experience writing functions, methods, and structures in Lisp in that class, and I had nearly forgotten everything I learned there anyway. It took me about two days to pick up Lisp and advance to its object-oriented features. Meanwhile, I found the book Practical Common Lisp, which is pretty awesome, and it is available on the web! It actually explains Lisp much better than the undergraduate class did. After becoming familiar with Lisp, I did find it well suited to programming the agent, because representing and interpreting data is unbelievably easy. It gives more freedom than other languages can, though that is sometimes the same as the freedom to make mistakes. =P

This week, the priority is to revise my thesis. Actually, the biggest task this week is the seminar presentation on Thursday, so revising my thesis is one of the best ways to organize the ideas for the presentation. After the presentations I have given in class, I feel my biggest problem is not practicing the opening and the details beyond the slides. What I did in my previous experience was mostly to outline the content on the slides and only review the details in my head. When presenting, because of nervousness, I often did not start the talk well, which usually made me even more nervous, so I forgot what I had prepared and did even worse. I got stuck in a vicious circle and ended up just reading the words on the slides. This time, I will try to rehearse a few times beforehand, in order to be prepared in both content and confidence.

Monday, February 16, 2009

Unit testing for Swing

The effort put into my thesis last week was under my estimate, because the ICS606 assignment took more time than I expected. I will put more work into it this week.

I have started the self-report tool for Hackystat using MiG Layout. I am not yet paying much attention to the UI, so it is too early to say whether MiG Layout perfectly fits my requirements or not. But so far, it is handy for managing simple arrangements of components. And its Quick Start Guide and Cheat Sheet are very helpful and intuitive.

When starting the new project, I also searched for a unit-testing package for Swing. After some digging, I settled on a package called FEST. Its assertions are quite flexible and readable, and it is an active project that has been running for a long time. After some experimenting, I finished my first test case with no difficult barriers. Then I noticed that what the tester does is simulate mouse/keyboard actions on a real Swing application; I can see what it is doing during the test. That is both good and bad. The good part is that I can see what's going on, and what's going wrong, in the test. The bad part is that I have to stop and watch it. If I move the mouse during the simulation, the test will probably fail, so I can no longer run the unit tests and do something else while waiting for them to finish. However, it gave me an interesting idea: it may be possible to make "demo" test cases that serve as usage examples to supplement the wiki guide.

Monday, February 9, 2009

Slow progress

I am working on the self-report tool for Hackystat (hackystat-sensor-manual). We decided to use MiG Layout for layout management. The project has just started, but progress is slow, because I am catching up on the Swing stuff in Java, which I have not touched for quite a long time and was never very good at.

The other thing on my priority list is my thesis. As the whole experiment became the classroom evaluation, my thesis topic has therefore changed from using the Software ICU to manage projects and encourage collaboration to using the Software ICU to understand and teach software metrics. The principle is the same; only the way of viewing and presenting it changes. The good news is that the major part of my thesis is already finished in the tech report of the Hackystat classroom evaluation, Fall 2008.

Another waiting project is the SVN data collection service, which responds to Google's new commit-notification POST and generates sensor data. It will be a great service, but it is not urgently needed because the SVN sensor is now working well. Still, implementing this new service will reduce the difficulty of setting up the SVN sensor. And users who do not use an automatic continuous-integration engine like Hudson will not need to remember to run the SVN sensor every time after a commit.

The plan for this week is to build an initial framework for the self-report tool and merge the evaluation tech report into my thesis.

Monday, February 2, 2009

More reports and papers

This is a busy season for writing reports and papers. I am finishing the tech report of the Hackystat Fall 2008 evaluation and will then get back to my thesis. Philip has just finished the draft paper on the Hackystat SOA, and I, as a co-author, am revising it.

The first revision of the Hackystat Fall 2008 evaluation is finished. It is even more interesting with the usage-logging data. When preparing the analysis of the system usage logs, I found that Excel's Pivot Table is a very powerful tool and quite easy to use. Once the data is correctly imported (the key is to set the correct separator), the Pivot Table will do all the statistical analysis and generate a very nice summary table.

Last week I was given a project to create a tool for users to self-report their untracked development events. It will be a client-side application based in Java. I have not yet spent much time researching JGoodies and MiG Layout, the visual layout packages for Java applications. But once I get familiar with the layout side, the primary functionality of the self-report tool should be a piece of cake. Further enhancements such as keyword memory/auto-completion, application/tool auto-selection, and event auto-generation will require a lot more study.

Monday, January 26, 2009

On the way to the Evaluation Report

The first draft of the Hackystat classroom evaluation for Fall 2008 is finished. Philip has proofread it once and given me feedback. I am now editing according to the feedback and finishing the future-directions section.

This week I will start analyzing the logging data collected during the evaluation period last semester. It will be interesting to compare it with the answers given by the students. This analysis and the results of the comparison will be included in the tech report.

Meanwhile, there is still a lot for me to learn about LaTeX. I am still quite new to it and not very familiar with it yet. One of my class assignments is required to be written in LaTeX as well, so this is really a good time for me to practice.

Currently, there is no programming work planned. The project browser seems to be in quite good shape now. Maybe after finishing the future-directions section, I will make more improvements accordingly.

Monday, January 19, 2009

Plan for this semester

Now it is the beginning of the second week of this semester. This semester I continue to work in Philip's lab and write my thesis for the Master's Plan A. Besides ICS700 and ICS690 (seminar), I am taking two more graduate courses: ICS621 Analysis of Algorithms and ICS606 Intelligent Autonomous Agents. Both of these courses look demanding. The former requires lots of paper writing, which may be good for me while I am writing my thesis. The latter will include quite an amount of Lisp programming, which I am not so familiar with. It seems it will be a very busy semester.

Plan for RA, research and thesis

The first priority will be finishing the tech report of the classroom evaluation of Hackystat in Fall 2008. Over half of it is done; it should be finished in a few days.

The second priority, which will soon become the first, is writing the thesis about the Software ICU. There are several tasks in it after finishing the tech report:
  1. Process the usage data collected in the last semester;
  2. Literature review of related work (software metrics, single- and multiple-project software management, software project portfolios, etc.);
  3. Review former Hackystat studies such as Telemetry.
Other things related to CSDL research:
  • Philip has begun a new project called Devcathlon, a game about software development. It will be an interesting project to work on;
  • I also want to make some improvements to the Hackystat system, including:
    1. Enhance performance, for example with a Java-based modularity system like Equinox and OSGi.
    2. Enhance the loading-process panel. The current implementation is too verbose; I want to make it more concise and/or use another approach such as a loading bar.
    3. Improve the portfolio page to better suit the 9-LCD workspace. One idea is to make the input panel hideable and to introduce an auto-refresh feature, but that will work better with the performance enhancement.