Training missions, version 2
The OpenHatch training missions are a very popular way to learn the skills it takes to contribute to open source software. In the years that have passed since the first version was deployed, we've learned a lot, and this summer, you can put that learning to work!
The current framework is limited by the fact that one must write new Python code for each training mission, and that forking the training missions is complicated. There is a new open source online learning framework called Oppia that we could plug in, which would give us the following advantages:
- Training missions could be viewed and edited as text, right on the web.
- Training missions can be forked by individual open source projects who want to customize the missions for their own purposes. This will allow communities to create (for example) a git training mission that suits their precise workflow, based on a high-quality starting point.
Your work this summer would be primarily in the code. In particular:
- Database engines: Oppia is written for Google App Engine. You would work with one of its main authors as well as a mentor within OpenHatch to make it able to run on non-App Engine data stores. (Probably, the best way is for Oppia to provide support for pluggable storage classes.) You would need to make it compatible with Django models, so it can run on the OpenHatch deployment.
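To make the "pluggable storage classes" idea concrete, here is a minimal sketch of how Oppia code could talk to an abstract storage interface while each deployment supplies its own backend. The class and method names are illustrative assumptions, not Oppia's actual API; a Django backend would wrap Django models the same way the in-memory backend wraps a dict.

```python
from abc import ABC, abstractmethod

class ExplorationStorage(ABC):
    """Abstract storage interface; names here are hypothetical, not Oppia's."""

    @abstractmethod
    def get(self, exploration_id):
        ...

    @abstractmethod
    def save(self, exploration_id, data):
        ...

class InMemoryStorage(ExplorationStorage):
    """Trivial backend, useful for tests. A Django backend would implement
    the same two methods on top of Django models; an App Engine backend
    would use the datastore."""

    def __init__(self):
        self._store = {}

    def get(self, exploration_id):
        return self._store[exploration_id]

    def save(self, exploration_id, data):
        self._store[exploration_id] = data

backend = InMemoryStorage()
backend.save("git-mission", {"title": "Using git"})
print(backend.get("git-mission")["title"])  # prints "Using git"
```

The point of the abstraction is that the rest of Oppia never imports a concrete backend directly, so swapping App Engine for Django becomes a configuration change.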
- You would need to port the existing training missions to this framework. This would require careful consideration; but the result would likely be less code!
- If there is time, you could build up to three new training missions with this new framework. IRC, Mercurial, and joining a mailing list seem like the most important three.
To get started, try one of the existing missions to get a sense of what they are like, and get on #openhatch and say you are interested!
Modernize the OpenHatch Django codebase
Right now, the OpenHatch codebase can be described as a tangled mess of the best Django practices we could pick up in 2009. Do you want to learn about refactoring by trying it with a real codebase?
We do have high test coverage. That will help you dramatically if you want to take on this task! That is, the test coverage is what will keep you sane.
If you're excited about code quality, here is the kind of thing we'd like to see over the summer:
- Discuss code style guidelines with us, come to a consensus, and then document them. Once we adopt and document them, we can look for tools that automatically find violations (like pep8.py or a specially-configured pychecker), and then you would make a sequence of small changes that fix those violations while keeping the codebase working.
- Make a diagram of our circular imports, and then propose a plan to decrease how circular they are. (You can use tools like http://chadaustin.me/2009/05/visualizing-python-import-dependencies/ to visualize the import dependencies.) Then, in small patches, move (or remove) code to simplify the import chain.
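As a rough sketch of the import-diagram idea (this is not the tool linked above, just an illustration of the approach), you can extract module-level imports with the standard-library `ast` module and look for cycles with a depth-first search. The toy "modules" here are fake source strings:

```python
import ast

def imports_of(source):
    """Return the set of top-level module names imported by some source code."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module.split(".")[0])
    return names

def find_cycle(graph, start, path=()):
    """Return one import cycle reachable from `start`, or None."""
    if start in path:
        return path[path.index(start):] + (start,)
    for neighbor in graph.get(start, ()):
        cycle = find_cycle(graph, neighbor, path + (start,))
        if cycle:
            return cycle
    return None

# Toy example: two fake modules that import each other.
sources = {"profile": "import base", "base": "import profile"}
graph = {mod: imports_of(src) for mod, src in sources.items()}
print(find_cycle(graph, "profile"))  # prints ('profile', 'base', 'profile')
```

On a real codebase you would build `sources` by reading the actual `.py` files, and the cycles found would tell you which modules to untangle first.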
In the second half of the summer, we recommend reading through the custom code we have and considering whether a Django reusable app can replace it. I expect that in a large number of cases, you can find external code to replace most of the OpenHatch app. (-:
The portfolio editing interface is an example of code that could be replaced by external libraries (for example, something using Backbone.js on the JS side and Tastypie on the Python side), after which much of our code could be thrown away.
Additionally, if you are excited about Django's new class-based views, you could consider porting the OpenHatch views to be class-based.
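To show why porting to class-based views can reduce boilerplate, here is a minimal pure-Python sketch of the dispatch pattern behind them: one class, one method per HTTP verb, and a `dispatch()` that routes to the right handler. (Django's real base class lives in `django.views.generic`; this is an illustration of the pattern, not the actual API.)

```python
class View:
    """Toy re-creation of the class-based-view dispatch idea."""
    http_method_names = ["get", "post"]

    def dispatch(self, method, *args, **kwargs):
        handler = getattr(self, method.lower(), None)
        if handler is None or method.lower() not in self.http_method_names:
            return "405 Method Not Allowed"
        return handler(*args, **kwargs)

class ProjectListView(View):
    """A function-based view like `def project_list(request): ...` becomes a
    small class; shared behavior (pagination, templates) can then move into
    reusable base classes instead of being copy-pasted between functions."""

    def get(self):
        return "rendered list of projects"

view = ProjectListView()
print(view.dispatch("GET"))   # prints "rendered list of projects"
print(view.dispatch("PUT"))   # prints "405 Method Not Allowed"
```

The win when porting is not the dispatch itself but the inheritance: common behavior across OpenHatch views can live once in a shared base class.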
Volunteer opportunity finder, version 2
This summer, you could take the current volunteer opportunity finder interface (at https://openhatch.org/search/) and make it more focused on projects rather than tasks.
This work would probably end up touching much of the web app. To make it really shine, I can imagine the following project plan:
- Review mockups and suggestions within the community, such as the recent conversations on the OH-Devel list.
- Adjust the interface at /search/ to be oriented around projects (i.e., open source communities) first.
- In doing the adjustment above, you might discover you need the web app to collect more information about open source projects. You would then need to change the project editing interface accordingly.
- You could conclude that we should automatically discover more information from external resources; we can discuss how to bring that information into the web app efficiently. (We have some experience with this and have developed strong opinions over the years, which will serve you well: you'll save time by not repeating the architecture mistakes we made in the past!)
Data-driven mentorship app
When a new contributor makes their first contribution to an open source community, someone will (if the newcomer is lucky) notice the contribution, review it, and merge it. But when the newcomer wants a new task to work on, it is still up to them to find one. And when an active maintainer wants to know which newcomers to ping, to see if they are doing okay or want more help, there is no clear way to do that.
If you want to change that, you can work with us on a distributed mentorship tracker. This has been pioneered and prototyped by the Ubuntu Developer Advisory Team, and it typically looks like the following:
- Each new contributor to an open source project is visualized as a draggable object.
- There is one column per state that a contributor can be in. The columns are ordered left to right, typically from "least experienced" to "most experienced". For example:
  - Submitted their first bug
  - Submitted their first patch
  - Contributing actively
  - Contributing expertly
  - Should be invited to be a maintainer
  - In the application process to be a maintainer
As mentors, we would leave comments on people, visible only to the other people we designate as mentors, and we would use those comments to coordinate emailing these newcomers and checking in with them. When we feel a newcomer is ready to move from one column to the next, a mentor would move them. Because mentors leave notes about their mentorship, another mentor can pick up where one left off, even if the first mentor gets busy.
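The board described above can be sketched as a small data model: an ordered list of states, contributors who each sit in one state, and mentor-only notes attached to each contributor. All names here are hypothetical, not an existing schema; in the real app these would become Django models.

```python
from dataclasses import dataclass, field

# Ordered left-to-right, matching the columns listed above.
STATES = [
    "Submitted their first bug",
    "Submitted their first patch",
    "Contributing actively",
    "Contributing expertly",
    "Should be invited to be a maintainer",
    "In the application process to be a maintainer",
]

@dataclass
class Contributor:
    name: str
    state: str = STATES[0]
    notes: list = field(default_factory=list)  # visible to mentors only

    def add_note(self, mentor, text):
        """Record a private mentor note, so another mentor can pick up later."""
        self.notes.append((mentor, text))

    def advance(self):
        """Move the contributor one column to the right (dragging the card)."""
        index = STATES.index(self.state)
        if index + 1 < len(STATES):
            self.state = STATES[index + 1]

newcomer = Contributor("alex")
newcomer.add_note("mentor1", "Pinged about their first patch.")
newcomer.advance()
print(newcomer.state)  # prints "Submitted their first patch"
```

Dragging a card in the HTML/JS mockup would map onto `advance()` (or an arbitrary state change) on the server side.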
To make this work, you'll need to do a few different things, using a few different tools.
- First, you would need to make an HTML/JS mockup of the above.
- Then, you would need to make a simple Django app that could speak to the HTML/JS mockup.
- Then, you would write one data extractor plugin for one community. If the Ubuntu team is interested in using this tool, we could start by writing one for Launchpad.
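A data extractor plugin could be as simple as one class per community that yields contribution events for the Django app to store. This is a hedged sketch: the interface and method names are assumptions, and the Launchpad class below returns canned data rather than calling the real Launchpad API (which a real plugin would do via launchpadlib).

```python
class ContributionSource:
    """Hypothetical base class for per-community extractor plugins."""

    def fetch_events(self, username):
        """Yield (event_type, description) tuples for a contributor."""
        raise NotImplementedError

class FakeLaunchpadSource(ContributionSource):
    """Stand-in for a real Launchpad extractor; a real one would query
    Launchpad's API instead of yielding canned data."""

    def fetch_events(self, username):
        yield ("bug", "%s reported their first bug" % username)
        yield ("patch", "%s submitted a merge proposal" % username)

events = list(FakeLaunchpadSource().fetch_events("alex"))
print(len(events))  # prints 2
```

Adding support for another community (GitHub, a mailing list, a bug tracker) would then mean writing one more subclass, without touching the board itself.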
If we have time, I would love to add a bit of science to the process. We can talk about how to add behavioral studies and random sorting to the process if you're interested in that.
You should also talk to Asheesh, your most likely mentor, about getting a tour of the Ubuntu tool.