Locked yourself out of Jenkins?
Posted: February 13, 2012 | Filed under: Development | Tags: Jenkins, Tips
Removed all permissions from your account, did you? Saved it, did you? Feeling a bit stupid?
Yeah, me too!
First steps
SSH to your server and stop Jenkins
/etc/init.d/jenkins stop
Now modify the config XML
sudo vi /var/lib/jenkins/config.xml
You now have two options to regain access
Yeehaw way
Turn security off and remove the <authorizationStrategy> node
<useSecurity>false</useSecurity>
Now restart Jenkins and head over to your admin UI to re-secure it quickly, before the trolls get in.
/etc/init.d/jenkins start
Like a boss way
If you want to be safe and not open up a security hole at all, you can add the security permissions into the config XML manually. Just replace USERNAME with your own username:
<authorizationStrategy class="hudson.security.ProjectMatrixAuthorizationStrategy">
<permission>hudson.model.Computer.Configure:USERNAME</permission>
<permission>hudson.model.Computer.Connect:USERNAME</permission>
<permission>hudson.model.Computer.Create:USERNAME</permission>
<permission>hudson.model.Computer.Delete:USERNAME</permission>
<permission>hudson.model.Computer.Disconnect:USERNAME</permission>
<permission>hudson.model.Hudson.Administer:USERNAME</permission>
<permission>hudson.model.Hudson.Read:USERNAME</permission>
<permission>hudson.model.Hudson.RunScripts:USERNAME</permission>
<permission>hudson.model.Item.Build:USERNAME</permission>
<permission>hudson.model.Item.Configure:USERNAME</permission>
<permission>hudson.model.Item.Create:USERNAME</permission>
<permission>hudson.model.Item.Delete:USERNAME</permission>
<permission>hudson.model.Item.Read:USERNAME</permission>
<permission>hudson.model.Item.Workspace:USERNAME</permission>
<permission>hudson.model.Run.Delete:USERNAME</permission>
<permission>hudson.model.Run.Update:USERNAME</permission>
<permission>hudson.model.View.Configure:USERNAME</permission>
<permission>hudson.model.View.Create:USERNAME</permission>
<permission>hudson.model.View.Delete:USERNAME</permission>
<permission>hudson.scm.SCM.Tag:USERNAME</permission>
</authorizationStrategy>
Now restart Jenkins and sit back with a smug grin.
/etc/init.d/jenkins start

[Image: Smug Croissant Guy]
Setting up a Jenkins build server on EC2
Posted: February 11, 2012 | Filed under: Development | Tags: Amazon, CI, Cloud, Jenkins, Lean Startup, startups
In my last post about setting up Jenkins I looked at how to do a basic Jenkins setup on an Ubuntu machine. In my case I set it up on an old machine, which is fine when I'm working at home, but if I make changes when I'm not at home or when my build machine isn't running, the changes are not built and tested. If you hadn't worked it out, in a proper continuous integration environment you should be running your builds continuously. So this morning I set out to get an EC2 instance running Jenkins.
The other reason I wanted to have the build server running continually is that I need to start scheduling some jobs for Knowsis to do the NLP part of our process. I could do this with cron, but build servers like Jenkins and TeamCity offer really flexible scheduling and a nice interface for feedback, so I don't need to worry about building one myself, for now.
Setting up an EC2 instance
The first step in the process is to set up your EC2 instance. Amazon kindly provide a free tier so you can get a free micro instance for a year. This should work for you initially if your builds aren’t overly complex.
I won't run through exactly how to get your instance running as you can find plenty of guides online; if you are completely new to EC2 I would recommend this guide provided by Amazon.
One thing to note is that you should make sure you set up the security group for your image to allow all traffic on port 80 so you can actually see Jenkins.
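If you prefer the command line to the management console, something like the following should do it. This is just a sketch using the AWS CLI tools, and the security group name "jenkins" is a placeholder for whichever group your instance actually uses:
aws ec2 authorize-security-group-ingress --group-name jenkins --protocol tcp --port 80 --cidr 0.0.0.0/0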
Installing nginx
In my previous post I mentioned setting up nginx to route requests to Jenkins but didn't cover it, so we'll go through it here, as we need a web server running to proxy the requests coming through.
We’ll need to use YUM here as apt-get and aptitude aren’t installed. Thankfully the Amazon package index includes a version of nginx.
sudo yum install nginx
Once installed we should start the nginx server to make sure that we can see our new EC2 instance before proceeding.
sudo /etc/rc.d/init.d/nginx start
You should now be able to hit your instance in a web browser. You can get the public hostname of your instance from the AWS management console, but it should look something like this:
http://ec2-XX-XX-XX-XXX.compute-1.amazonaws.com/
Installing Jenkins
In my previous post we used aptitude to install Jenkins, but the Amazon Linux AMI doesn't have the aptitude package manager, so we have to use yum instead.
First we need to add the repository to the list of YUM repos:
sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo
and then get the GPG key:
sudo rpm --import http://pkg.jenkins-ci.org/redhat/jenkins-ci.org.key
then we can install Jenkins:
sudo yum install jenkins
The installer will install the server as well as create a jenkins user under which the service will run. You can now start the Jenkins service:
sudo /etc/init.d/jenkins start
As Jenkins runs on port 8080 by default, the next step is to get nginx to proxy all requests on port 80 to port 8080. You could probably just change Jenkins to run on port 80 by default if you wanted. Anyway, just change your nginx config (/etc/nginx/nginx.conf) so that the server section reads as follows:
server {
    listen 80 default;
    server_name _;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
I won’t go into the details of nginx setup, but this is the minimum required to get you to a point of having Jenkins working.
Make sure you restart nginx so the config changes take effect:
sudo /etc/rc.d/init.d/nginx restart
You should now be able to view the Jenkins homepage in a web browser using the same URL as before:
http://ec2-XX-XX-XX-XXX.compute-1.amazonaws.com/
Security
One of the topics mentioned but not covered previously was security. As your build server is now publicly visible, you will want to set up some sort of security to prevent people from doing bad things. The simplest way is to use Jenkins' own user database, but there are also options to use an LDAP server or the underlying OS users. A few points to make sure that the server is secure:
- Disable the option to allow new users to sign up (unless you actually want people to be able to sign up)
- Change the authorisation section to either allow logged-in users to do anything, or use matrix-based security and make sure anonymous users have no permissions (see the snippet below)
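If you go with Jenkins' own user database, the sign-up option ends up in the <securityRealm> element of /var/lib/jenkins/config.xml. A minimal sketch of what that looks like is below; the matrix permissions themselves live in the <authorizationStrategy> block shown in the lockout post above.
<securityRealm class="hudson.security.HudsonPrivateSecurityRealm">
  <disableSignup>true</disableSignup>
</securityRealm>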
Setting up builds
You should refer back to my previous post on how to get your builds set up.
Let me know how you get on.
Continuous Integration for Python
Posted: January 30, 2012 | Filed under: Development | Tags: CI, Jenkins, Lean Startup, python, TDD
After beavering away at some ideas for Knowsis over the last 3 weeks, and admittedly not really doing it test first, I spent this weekend finally getting round to setting up a CI server and some builds to run the pitiful number of tests that I have actually written, to try and make me write more. It's been bugging me all along, but as the only developer at the moment it's not been at the top of the priority list. However, my previous experience of setting this kind of thing up for legacy projects tells me that if I don't get round to it soon, it will be infinitely more painful in the long run.
At 7digital we used TeamCity as our CI build server, but knowing how much their build agent licensing can cost I thought I would look at the open source alternatives, seeing as we're bootstrapping. After some research I narrowed it down to either BuildBot or Jenkins (formerly Hudson), and digging a bit deeper it seems that people with experience of both suggest using Jenkins first, until you hit something you really need BuildBot for, as BuildBot can be quite painful to get set up; Jenkins, on the other hand, is very simple to get set up.
Installing Jenkins
One of my favourite things about using Ubuntu, coming from a Windows background, is the ease of installing things using apt-get. These instructions are taken from the Jenkins site:
wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
sudo sh -c 'echo deb http://pkg.jenkins-ci.org/debian binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo aptitude update
sudo aptitude install jenkins
Jenkins gets installed and set up to run as a daemon at startup under a newly created jenkins user. It's now usable at http://127.0.0.1:8080; however, I wanted to skip the need for a port number, so I set up nginx to proxy requests for me. This isn't a necessary step so I won't go into it here, but there are some simple guides available online if you haven't used nginx before (just remember to restart nginx after you change the config, it'll save you hours of head scratching!).
Setting up Jenkins to work with git
This step isn't necessary if you don't use git, but I'll go into it as I do, and it took me a bit of figuring out; plus there wasn't a huge amount of info out there on how to do it.
From the home screen of Jenkins go to the plugins section:
Manage Jenkins -> Manage Plugins
In the Available tab find the "Jenkins GIT plugin" and check the install checkbox. If you use GitHub you can also install the "GitHub plugin", which creates a link from your project page to your GitHub repository and also allows you to use GitHub's post-receive hooks to notify Jenkins when code has been committed (not strictly necessary, as you can use polling to check for changes). Your Jenkins instance will need to be exposed publicly for this to work, so make sure you set up user authentication properly; there's also a plugin that allows you to use your GitHub logins for authentication if you want to use that.
Once you have selected the required plugins, click 'Download now and install after restart', which will install the plugins and restart Jenkins; it should take no more than a minute to complete.
Create SSH keys for Jenkins
You now need to set up the ssh keys for the Jenkins user. Open up a terminal window and switch to the jenkins user:
sudo -su jenkins
You can run through the creation of your public/private key pair as normal; it will be created in the Jenkins user's home directory (/var/lib/jenkins). If you want a guide for this, I have always found the one on the GitHub help pages to be the easiest to follow.
Now set up a user with your git repository for your build slaves to run as and copy the contents of the public key to it. You can use your own account if you wish, but I would recommend using a separate one.
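For reference, generating the key pair as the jenkins user is just the usual two commands; this is a sketch in which the comment string is only a label and the path assumes the default /var/lib/jenkins home directory. Paste the output of the second command into the SSH keys page of the account you just set up.
ssh-keygen -t rsa -C "jenkins@your-build-server"
cat /var/lib/jenkins/.ssh/id_rsa.pub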
Creating Jobs
Creating a job in Jenkins is really simple. From the Jenkins dashboard click "New Job". Enter the name of your job and select "Build a freestyle software project". Click OK.
For now you can ignore the options at the top of the next screen; head down to the source control section.
Source Code Management
If you installed the git plugin you should see git as an option here. Select the option that’s relevant and point it to the location of your repository.
Build Triggers
Further down is the "Build Triggers" section. Select the "Poll SCM" option; this will present you with a schedule box that allows you to set how frequently to poll your SCM, using the cron format. A few examples:
Every minute:
* * * * *
Every 10 minutes:
*/10 * * * *
Every hour:
@hourly
At 15 mins past every hour:
15 * * * *
Build Steps
So far the job will just check out your code when there are any changes, so now you need to make the job actually do something interesting. You can set up one or more build steps to run your unit tests, deploy your code to a test environment, run your system tests, deploy your code to live, etc.
At this point I'll just run the unit tests. If you have written some tests using unittest syntax you can use the nose test runner (nosetests) to automatically discover and run these tests. You can also get it to output the test results in JUnit report format so that Jenkins can display them. You will need to make sure that nose is installed on your build server and slaves for this to work.
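To give an idea of what nose will pick up, here is a minimal, made-up test module; nose discovers modules and functions whose names start with "test", as well as standard unittest.TestCase classes like this one:
# tests/test_example.py - a hypothetical module that nosetests will discover
import unittest

class TestSomething(unittest.TestCase):

    def test_addition(self):
        # a trivial assertion, just to prove the pipeline runs
        self.assertEqual(2 + 2, 4)

if __name__ == '__main__':
    unittest.main()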
Select "Execute Shell" in the "Add Build Step" dropdown and add the following line:
nosetests --with-xunit
The shell script will run from the top level of your project (known as the workspace root), so if nose cannot auto-discover your tests because they are buried in a folder tree you can always add a cd command to switch to that directory first. The --with-xunit switch will output an XML report in the JUnit format, called nosetests.xml, into the folder from which nosetests ran.
In the job configuration there is a "Post-build Actions" section, under which you should select "Publish JUnit test result report" and enter nosetests.xml.
If you use some other test format or want to use another test runner, enter the shell command that would execute those tests, remembering to install anything required onto your build server as well.
Now Go And Write Some Tests
That's it, you now have a build set up to run your tests every time you check in changes, so there is no excuse not to write any. This is only a start, and there are plenty of other things you might want to set up like failure notifications, test reports, dashboards, etc., so the best thing to do is explore the Jenkins site.
Using StructureMap with SolrNet – updated
Posted: August 12, 2010 | Filed under: Development | Tags: c#, Solr, SolrNet
The implementation has now been updated to allow for multi-core instances, which can be set up in your Bootstrapper like this:
var solr = (SolrConfigurationSection)ConfigurationManager.GetSection("solr");
var solrServers = solr.SolrServers;
ObjectFactory.Initialize(
    x => x.AddRegistry(new SolrNetRegistry(solrServers))
);
Your app config should look like the following:
<configuration>
<configSections>
<section name="solr"
type="StructureMap.SolrNetIntegration.Config.SolrConfigurationSection, SolrNet" />
</configSections>
<solr>
<server id="myobject" url="http://localhost:8080/solr/"
documentType="Your.Objects.Name, Your.Objects.Namespace" />
</solr>
</configuration>
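For completeness, here is roughly how you would then resolve and use one of the cores. This is only a sketch: it assumes each core's documentType maps to a distinct document class, Your.Objects.Name stands in for your own type, and you'll need the SolrNet and StructureMap namespaces in scope.
// Ask StructureMap for the operations instance matching the core's document type,
// then run a match-all query against that core.
var solrOps = ObjectFactory.GetInstance<ISolrOperations<Your.Objects.Name>>();
var results = solrOps.Query(new SolrQuery("*:*"));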
Using StructureMap with SolrNet
Posted: April 21, 2010 | Filed under: Development | Tags: c#, Solr, SolrNet
SolrNet is a great .NET library for querying and updating a Solr instance. I've been using it recently as part of a project in which we were using StructureMap as our IoC framework (like most of our projects). SolrNet has its own built-in IoC based on the Common Service Locator interface (Microsoft.Practices.ServiceLocation), as well as support for Windsor and Ninject.
As I didn't want to switch our IoC framework, I decided to write a registry class. It's now very simple to register the container using the following lines in your bootstrapper.
ObjectFactory.Initialize(
x => x.AddRegistry(
new SolrNetRegistry("http://localhost:8893/solr")
)
);
It's now included in the git master and the binaries should be available shortly.
Updated: The StructureMap adapter now allows for multi-core configuration.
Azure – first impressions
Posted: January 14, 2010 | Filed under: Development, Tech | Tags: Windows Azure
I signed up for a Windows Azure account earlier this week as I'm working on a project that I need hosting for, details of which will follow in due course. I had a look at a few hosting providers (including EC2 and Rackspace) and, seeing as Azure is free throughout January and at a 50% discount for 6 months, I thought I would give it a go.
Here are my first impressions:
- Signup
The signup process was far too long and overly complicated, and on top of that the setup process just wouldn't work for me in Google Chrome.
- Setup
The principles behind Azure are quite new: unlike EC2 there is no actual Windows Server OS running on a VM that you can just access via Remote Desktop; you need to set up the VM via a UI.
You first have to set up a project, which is really the notion of the server itself. The project can then have multiple services running on it. The available services are Windows Azure (compute and storage), SQL Azure (data) and AppFabric (service bus). The services can then have different roles, which are the actual applications; for example, the compute service can have a web role, which is a web application, or a worker role, which is like a Windows service.
For someone who's used to administering a server and using the management tools to set up websites etc., this seems a bit like the basic version of a settings panel. I understand the reasons behind it, making it accessible for those not used to server administration, but I feel like I'm being treated like someone who shouldn't be allowed to touch the advanced settings.
- Deployment
The deployment process is actually very simple, although there are a few annoyances.
Building a Cloud Service in Visual Studio creates two files, the application package (.cspkg) and a configuration file (.cscfg), which need to be uploaded through the relatively clean Azure web interface. Strangely, the beta of Visual Studio 2010 creates the package files if you right-click the Cloud Service project and click Publish, but not if you click Publish on the Build menu.
Once you have your packages, you can deploy the service to a staging server first without any additional setup. A URL is provided for the test server so you can do some testing, and once you are happy with it you can deploy it over to the production environment with one click. Once deployed, the service can be started, stopped, configured and deleted from the UI. At work we have a continuous integration environment set up with TeamCity running as our build server and automated deployments to our web servers; as nice as the Azure UI is, it would be good if the deployment were scriptable so it could be run automatically. I'm hoping, at the very least, that this becomes, if it isn't already, part of Visual Studio and TFS.
The deployment process was really, really slow, and my deployment failed numerous times without any feedback as to why, which made it impossible to debug. I still have no idea why it wouldn't deploy; I just recreated the Cloud Service from scratch and it worked.
I see this as ideal for a web developer who's got no idea about server administration. I'll persevere with it for now, or at least until the end of the free period (end of January), but my patience is already starting to wane and the idea of EC2 is certainly quite appealing, despite the price differential.