Friday, December 1, 2023

MyLearning - MongoDB with Python

What's the goal of this learning?

Quickly get my hands dirty with MongoDB concepts using Python

Setup

What is MongoDB Atlas?

It is a cloud service provided by MongoDB, and I will use this cloud instance instead of a local one. I signed up for it using my Google account to get the free tier.

Create a cluster

  • To create a cluster through the UI, follow this doc; at the end you'll get the username and password
  • [Optional] To create a cluster using the CLI
    • First, install Atlas CLI
      • brew install mongodb-atlas
    • Connect the Atlas CLI to your account
      • atlas auth login
    • atlas clusters create [name]
      • atlas clusters create siddesh 

PyMongo - The Python MongoDB Driver library

  • pip install pymongo

Compass - GUI application

  • Refer to this page for installation

Python Code
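
Here is a minimal sketch of connecting PyMongo to the Atlas cluster created above. The connection string, database and collection names below are placeholders, not values from my Atlas account; copy the real SRV URI from the Atlas UI (Connect -> Connect your application), and install the SRV extra with "pip install pymongo[srv]".

import pymongo

# Placeholder URI -- substitute your own username, password and cluster host
client = pymongo.MongoClient(
    "mongodb+srv://<username>:<password>@<cluster-host>/?retryWrites=true&w=majority"
)

db = client["learning"]      # databases are created lazily on first write
movies = db["movies"]        # collections too

# Insert a document and read it back
result = movies.insert_one({"title": "The Matrix", "year": 1999})
print("Inserted id:", result.inserted_id)
print(movies.find_one({"title": "The Matrix"}))

client.close()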

Thursday, October 1, 2020

Good resources to get started with AWS Lambda (python)

This blog is a collection of the articles I referred to while getting my hands dirty with AWS Lambda. I meticulously went through and personally tried each of these articles before listing them here.

I have focused on articles dealing with Lambda functions in Python, since Python is my preferred programming language.

Begin with ....

The following articles provide a quick introduction to AWS Lambda.
  • The AWS Lambda Tutorial gives a broad introduction to Lambda: the need for it, the building blocks associated with it (Lambda function, event source, log streams), a comparison of Lambda with EC2 and Elastic Beanstalk, benefits, limitations, pricing, use-cases and finally a simple hands-on example. But it's better to follow the hands-on example from the AWS documentation given in the next bullet point, since the instructions on that page are not up to date.
  • The Create a Lambda function with the console is a quick tutorial from AWS to get your hands dirty. It takes less than 5 minutes to complete the instructions. It is very basic and you won't have much fun at the end of it.
  • The AWS Lambda with Python: A Complete Getting Started Guide is a better example to start with. It demos the use of environment variables and the process of encrypting secret values using AWS KMS and then decrypting them in the Lambda function. The code sample to decrypt didn't work for me; the code below, which AWS itself generates while encrypting the environment variable, worked.

import os
import boto3
from base64 import b64decode

DB_HOST = os.environ["DB_HOST"]
DB_USER = os.environ["DB_USER"]
 
# DB_PASS holds the KMS-encrypted, base64-encoded secret. Decryption runs
# once per container, outside the handler, so warm invocations reuse it.
ENCRYPTED = os.environ['DB_PASS']
DECRYPTED = boto3.client('kms').decrypt(
    CiphertextBlob=b64decode(ENCRYPTED),
    EncryptionContext={'LambdaFunctionName': os.environ['AWS_LAMBDA_FUNCTION_NAME']}
)['Plaintext'].decode('utf-8')


def lambda_handler(event, context):
    print("Connected to %s as %s" % (DB_HOST, DB_USER))
    print("and the secret is %s" % DECRYPTED)
    return None

Step-up ... 

The tutorials in the previous section only dealt with creating Lambda functions from the AWS console UI and triggering them from the 'Test' interface of the same console. The next set of tutorials covers triggering the Lambda function through an HTTP endpoint provided by API Gateway; a minimal handler for that setup is sketched below.
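
A hedged sketch of such a handler (my own illustration, assuming API Gateway's Lambda proxy integration): the handler must return a dict with a statusCode and a string body.

import json


def lambda_handler(event, context):
    # With proxy integration, query string parameters arrive on the event;
    # they may be absent, hence the "or {}" guard
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "Hello, %s!" % name}),
    }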


Friday, October 5, 2018

Gradle: Publish artifacts to maven repo (nexus) - a working example

Refer to my GitHub repo https://github.com/siddeshbg/gradle-uploadArchives for a working demonstration of the Gradle way of publishing artifacts to Maven repos (both local and Nexus).

The README.md has a detailed description of how it works

Teamcity: Maven Artifact Dependency Trigger - an example

I'm back to blogging after a year's break. This blog is about the Teamcity "Maven Triggers". I didn't find a good usage example of this build trigger, hence this blog.

Teamcity by default provides many build trigger options, the most used being the "VCS Trigger", "Finish Build Trigger" or "Schedule Trigger". But I want to focus on the less used "Maven Triggers".

This Maven trigger monitors a maven repository for the configured GAV (GroupID:ArtifactID:Version) coordinates, and if it notices a new version being published, it triggers a build. We wanted to use this feature to avoid configuring branch-based jobs (build-configs).

Teamcity provides two variants of this "Maven Trigger"

  1. Maven Snapshot Dependencies Trigger
  2. Maven Artifact Dependency Trigger

This blog explains the usage of second variant.

How does it work?
We have a build which publishes its artifact to a Nexus maven repo, as shown below.

[Screenshot: the Nexus repo browser showing the published artifact's GroupID, ArtifactID and versions]

In this case, some "source" build is publishing an artifact with the GroupID, ArtifactID and versions marked in the above pic.

Now we have another build which needs to be triggered whenever that "source" build publishes to Nexus.

[Screenshot: the dependent build's trigger configuration page]

You add a new trigger by choosing "Maven Artifact Dependency Trigger" and provide the Nexus details where that "source" build is publishing, like:

  • GroupID (ex: temp), 
  • ArtifactID (ex: greeterApp) 
  • Version range 
    • We can configure versions in various ways. Refer TC documentation https://confluence.jetbrains.com/display/TCD18/Configuring+Maven+Triggers
    • In this case, we have configured it with [3.1,), which means: trigger a build whenever a version >= 3.1 is published (see the sketch after this list).
  • Type (ex: jar)
  • Maven repo url (ex: http://nexus.myorg.com/repository/third-party-lib)
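
Conceptually (this is my own illustration, not how Teamcity implements it), the trigger just polls the repository's maven-metadata.xml for the configured GroupID:ArtifactID and fires when a version matching the range shows up. A hedged Python sketch, using the example repo URL and GAV from this post and assuming plain numeric versions:

import urllib.request
import xml.etree.ElementTree as ET

REPO = "http://nexus.myorg.com/repository/third-party-lib"
GROUP, ARTIFACT = "temp", "greeterApp"


def published_versions():
    # Maven repos maintain a maven-metadata.xml listing every published version
    url = "%s/%s/%s/maven-metadata.xml" % (REPO, GROUP.replace(".", "/"), ARTIFACT)
    with urllib.request.urlopen(url) as resp:
        tree = ET.parse(resp)
    return [v.text for v in tree.findall("./versioning/versions/version")]


def in_range(version, minimum=(3, 1)):
    # [3.1,) means: any version >= 3.1 (assumes dotted numeric versions)
    parts = tuple(int(p) for p in version.split(".")[:2])
    return parts >= minimum

print("Versions that would fire the trigger:",
      [v for v in published_versions() if in_range(v)])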

Now whenever that "source" build publishes an artifact with a newer version like "3.5", this build gets triggered.

In this case, the build overview page "Triggered by:" section says
Triggered by:        temp:greeterApp:[3.1,) on 05 Oct 18 06:21
Note: I have not documented the Nexus authentication required, since it was already set up in my case and I didn't need to worry about it.

Wednesday, September 6, 2017

Testing Ansible roles with Molecule - tutorial

Molecule provides a clean way of testing the Ansible roles you write. Molecule can spin up a Docker container, run your Ansible role on it and report back the test results. In this blog I'm going to demonstrate a simple use case of Molecule: testing a simple Ansible role on a Docker container.

Prerequisites

Install the following on your workstation. I'm using a Mac, and the installation commands below are for Mac.
  • Docker 
    • brew install docker
    • Start docker service on Mac
  • Ansible
    • pip install ansible
  • Molecule
    • pip install molecule

Sample code

The sample code used for this tutorial is hosted on GitHub at https://github.com/siddeshbg/molecule_tutorial

Here are the files involved
  1. molecule.yml
  2. playbook.yml
  3. roles/base/tasks/main.yml
  4. tests/test_default.py

How does it work?

The unit test cases are written using the Python-based Testinfra. You don't need to install it separately, since the molecule package includes it.

Our goal is to test an Ansible role. In this example we want to test the role ...

cat roles/base/tasks/main.yml
---
- name: Install Aptitude
  apt:
    name: aptitude
    state: present
This is a simple role where we want to install the package "aptitude" on the target machine.

Now the role is ready and we want to use Molecule to test it. Molecule can launch a Vagrant machine, a Docker container, an AWS instance or something else to test it, but in this example we will configure Molecule to use Docker.

We can initialize Molecule by calling "molecule init --provider docker", which creates the basic configuration files. I have selectively hosted a few of the files generated by this command on GitHub.

The "molecule.yml" is the main configuration file. In the "molecule.yml" file we insist Molecule to launch Docker

cat molecule.yml
---
dependency:
  name: galaxy
driver:
  name: docker
docker:
  containers:
    - name: ansible-siddesh
      image: ubuntu
      image_version: latest
      privileged: true
verifier:
  name: testinfra
Here we instruct Molecule to use Docker as the driver, use the Docker image "ubuntu:latest" and create a container from this image named "ansible-siddesh". We also use the default verifier, Testinfra.

The "playbook.yml" file is the default Ansible playbook, molecule will look for. In case if we have named our playbook differently, then we can specify that in the "molecule.yml" file as
---
ansible:
  playbook: myplaybook.yml
Let's look at the content of our "playbook.yml":

cat playbook.yml
---
- hosts: all
  roles:
    - role: base
This is a simple playbook which just calls the role "base" that we want to test.

Next we need to write our Testinfra-based test cases.

cat tests/test_default.py
import testinfra.utils.ansible_runner

testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
    '.molecule/ansible_inventory').get_hosts('all')

def test_hosts_file(File):
    f = File('/etc/hosts')

    assert f.exists
    assert f.user == 'root'
    assert f.group == 'root'

def test_packages(Package):
    assert Package("aptitude").is_installed
We have defined two tests here: one checks that the "/etc/hosts" file exists and is owned by root, and the other ensures that the aptitude package is installed.

Now it's time to run the tests with Molecule:
$ molecule test
--> Destroying instances...
--> Checking playbook's syntax...
playbook: playbook.yml
--> Creating instances...
--> Creating Ansible compatible image of ubuntu:latest ...
Creating container ansible-siddesh with base image ubuntu:latest...
Container created.
--> Starting Ansible Run...
PLAY [all] *********************************************************************
TASK [Gathering Facts] *********************************************************
ok: [ansible-siddesh]
TASK [base : Install Aptitude] *************************************************
The following additional packages will be installed:
 0 upgraded, 38 newly installed, 0 to remove and 26 not upgraded.
changed: [ansible-siddesh]
PLAY RECAP *********************************************************************
ansible-siddesh            : ok=2    changed=1    unreachable=0    failed=0

--> Idempotence test in progress (can take a few minutes)...
--> Starting Ansible Run...
Idempotence test passed.
--> Executing ansible-lint...
--> Executing flake8 on *.py files found in tests/...
--> Executing testinfra tests found in tests/...
============================= test session starts ==============================
platform darwin -- Python 2.7.13, pytest-3.1.3, py-1.4.34, pluggy-0.4.0
rootdir: /Users/siddesh.gurusiddappa/work/github/molecule_tutorial, inifile:
plugins: testinfra-1.5.5
collected 2 items

tests/test_default.py ..
=============================== warnings summary ===============================
None
  Module already imported so can not be re-written: testinfra
-- Docs: http://doc.pytest.org/en/latest/warnings.html
===================== 2 passed, 1 warnings in 0.66 seconds =====================
--> Destroying instances...
Stopping container ansible-siddesh...
Removed container ansible-siddesh.

The command "molecule test" did quite a lot of things:

  • It checked the playbook syntax
  • It created the docker container using the image "molecule_local/ubuntu:latest"
    • Alternatively if you just want the docker instances to be created without doing anything else, you can run the command "molecule create"
  • It ran our Ansible role
    • Alternatively you can run "molecule converge", which will create docker container and run Ansible role on it.
  • Next it tests whether the role is idempotent (repeated runs produce the same result)
  • Next it runs our tests. As you can see, both of our tests passed.
    • Alternatively you can run "molecule verify" after running "molecule converge" to run tests
  • Next it destroys the container 
    • You can run "molecule destroy"
I like to use Molecule while developing Ansible roles; basically it gives me a Docker container to test my role on. I start by writing a role and then run "molecule converge", which creates a Docker container with the role executed. I then log in to the container using "docker exec -it container-id /bin/bash" and validate my role execution manually.




Friday, July 14, 2017

Dockerfile Linter --> projectatomic/dockerfile-lint


projectatomic/dockerfile-lint is a Dockerfile linter which can parse a Dockerfile and detect syntactic errors.

Usage

It can be called via the Docker CLI as shown below

docker run -i --rm -v `pwd`:/root:ro projectatomic/dockerfile-lint dockerfile_lint

--------INFO---------

INFO: There is no 'EXPOSE' instruction. Without exposed ports how will the service of the container be accessed?. 
Reference -> https://docs.docker.com/engine/reference/builder/#expose


INFO: There is no 'CMD' instruction. None. 
Reference -> https://docs.docker.com/engine/reference/builder/#cmd
In the above command, I had a Dockerfile in my current directory, and my current directory is mounted under /root inside the container. The command created a container from the image projectatomic/dockerfile-lint and ran the command dockerfile_lint inside it. The dockerfile_lint is a node script available in the PATH inside the container.

The above run gave two INFO messages, pointing out that no "EXPOSE" or "CMD" instruction is defined in the provided Dockerfile.

Here is an example of a Dockerfile with an error

docker run -i --rm -v `pwd`:/root:ro projectatomic/dockerfile-lint dockerfile_lint
/opt/dockerfile_lint/lib/parser.js:454
            while ((isComment(lines[i].trim()) || !lines[i].trim())  && (i < lines.length)) {
                                      ^

TypeError: Cannot read property 'trim' of undefined
    at Object.parse (/opt/dockerfile_lint/lib/parser.js:454:39)
    at Linter.validate (/opt/dockerfile_lint/lib/linter.js:106:27)
    at lint (/opt/dockerfile_lint/bin/dockerfile_lint:85:33)
    at lintDockerFile (/opt/dockerfile_lint/bin/dockerfile_lint:104:9)
    at Object.<anonymous> (/opt/dockerfile_lint/bin/dockerfile_lint:144:5)
    at Module._compile (module.js:570:32)
    at Object.Module._extensions..js (module.js:579:10)
    at Module.load (module.js:487:32)
    at tryModuleLoad (module.js:446:12)
    at Function.Module._load (module.js:438:3)
Can you guess the issue in the Dockerfile I provided by looking at the error? Hell!!
I ended the RUN command with "&&", but didn't provide the next statement, as shown below

RUN apk --update add sudo python py-pip && \
    apk --update add build-dependencies python-dev && \
I hope this tool returns a better error message for this some day!!






Wednesday, May 17, 2017

Jenkins groovy script to find the node on which the last build of a job ran

The below code is intended to be run in Jenkins script console

Problem

I want to find the Jenkins node on which the last build of the job "my-job-name" ran. This script determined that it ran on centos5-x64-01.

Code

def getJobs() {
    def hi = hudson.model.Hudson.instance
    return hi.getItems(hudson.model.Job)
}

def getBuildJob(String jobName) {
    def buildJob = null
    def jobs = getJobs()
    jobs.each { job ->
        if (job.displayName == jobName) {
            println("Found")
            println("Exiting job search")
            buildJob = job
            // 'return' inside each{} only ends the current iteration,
            // but buildJob is already captured at this point
            return buildJob
        }
    }
    return buildJob
}

job_name = "my-job-name"

job_id = getBuildJob(job_name)

last_build = job_id.getBuilds()[0]

println("Recent Builds of the job " + job_name + " " + job_id.getBuilds())
println("last_build=" + last_build)

println("The last_build of Job " + job_name + " built on the node " + last_build.getBuiltOn())


Output

Found
Exiting job search
Recent Builds of the job my-job-name [my-job-name #1474, my-job-name #1473, my-job-name #1472, my-job-name #1466, my-job-name #1421]
last_build=my-job-name #1474
The last_build of Job my-job-name built on the node hudson.slaves.DumbSlave[centos5-x64-01]


Jenkins groovy script to list busy nodes

The below code is intended to be run in Jenkins script console

def busy = [];
busy = hudson.model.Hudson.instance.slaves.findAll { it.getComputer().countBusy() > 0 }
out.println("Busy nodes: "+busy.collect { it.nodeName }.join(", "))

Sunday, May 14, 2017

Jenkins Elastic Build slaves with Apache Mesos

In this blog, I'll share my experience of setting up a Jenkins server to obtain slaves dynamically from Mesos. You can get the broader picture from the reference links provided at the end.
This is a very simple test setup: basically, install Jenkins and Mesos on my Mac laptop and run a simple hello-world Jenkins job, which gets executed on the Mesos cloud.

My Environment

  • Mesos - 1.2.0
  • Jenkins - 2.46.2
  • Mac OS - Sierra (10.12.3)

Installation

  • Mesos - There is no pre-built installer; we need to get the code and build it. Refer to http://mesos.apache.org/documentation/latest/getting-started/ for complete details. Below are the commands I ran on my MacBook to install Mesos
    • Download mesos code
    • Extract
      • tar -xzvf mesos-1.2.0.tar.gz
    • Install mesos build pre-requisites
      • brew install Caskroom/cask/java (I already had java and hence didn't run this)
      • brew install wget git autoconf automake libtool subversion maven
        • Got errors with git, subversion and maven, since I already had old versions. It asked me to upgrade. Upgraded them with the below commands
        • brew upgrade git
        • brew upgrade maven
        • As per the Mesos doc, chose to uninstall the existing subversion and re-install
          • brew unlink subversion
          • brew install subversion
      • I had python pip & virtualenv and hence I skipped installing them
    • Build the mesos code
      • cd mesos-1.2.0
      • ./bootstrap
      • ./configure CXXFLAGS=-Wno-deprecated-declarations
      • make
      • make check
    • Start Mesos master
      • sudo su -
      • mkdir /var/lib/mesos
      • ./bin/mesos-master.sh --ip=127.0.0.1 --work_dir=/var/lib/mesos
    • Start mesos agent
      • Open new terminal
      • cd mesos-1.2.0
      • ./bin/mesos-agent.sh --master=127.0.0.1:5050 --work_dir=/var/lib/mesos
    • Test/Verify your mesos
      • Browse http://127.0.0.1:5050
      • Run test frameworks
        • C++:   ./src/test-framework --master=127.0.0.1:5050
        • Java: ./src/examples/java/test-framework 127.0.0.1:5050
        • Python: ./src/examples/python/test-framework 127.0.0.1:5050. It gave me an error and I didn't bother to troubleshoot it
  • Installing Jenkins
    • Download jenkins-*.pkg file and install it on your laptop
    • Refer https://jenkins.io/download/
  • Install Jenkins mesos plugin
    • Browse your local jenkins : http://localhost:8080 -> Manage Jenkins -> Manage plugins -> Available -> Search Mesos -> Install and reboot jenkins (if required)
  • Configure mesos details in your Jenkins
    • Browse to http://localhost:8080/configure -> Cloud -> Add a new cloud
    • Give the path to the Mesos native library under "Mesos native library path". Since I built Mesos on a Mac, the shared library is "libmesos.dylib"; for Linux it will be libmesos.so
    • Provide the "Mesos Master [hostname:port]", Description and retain the rest of the defaults. By default the Mesos host label will be "mesos"; you need to use this label while configuring the job
    • Test the mesos connection by clicking "Test Connection"

Configure Jenkins job to use mesos

  • Create a simple HelloWorld Freestyle project
  • Importantly, you need to specify the label "mesos" under "Restrict where this project can be run"

Test

  • Run the HelloWorld project
  • As you can observe, this job got executed on a mesos host mesos-jenkins-ddd4afdc45d7490c80a9706889044586-mesos

Reference

Tuesday, April 11, 2017

yamllint ( YAML Linter)

yamllint is basically a program to check the syntax of YAML files. If you have YAML files in your project, you can have your CI (continuous integration) check their syntax.

What is yamllint?

A linter for YAML files

What is Linting?

Linting is the process of running a program that will analyse code for potential errors.

A simple way to install it

pip install yamllint

A simple usage

yamllint .

Output:
./my1.yml
  1:1       warning  missing document start "---"  (document-start)
  19:7      error    wrong indentation: expected 8 but found 6  (indentation)
  22:9      error    wrong indentation: expected 10 but found 8  (indentation)
  35:9      error    wrong indentation: expected 10 but found 8  (indentation)
  40:81     error    line too long (86 > 80 characters)  (line-length)
  46:9      error    wrong indentation: expected 10 but found 8  (indentation)
  49:7      error    wrong indentation: expected 8 but found 6  (indentation)

./my2.yml
  1:1       warning  missing document start "---"  (document-start)
  6:7       error    wrong indentation: expected 4 but found 6  (indentation)
  9:5       error    wrong indentation: expected 2 but found 4  (indentation)
  11:5      error    wrong indentation: expected 6 but found 4  (indentation)

It recursively checks all the YAML files in the current directory.

Can we tell yamllint what to check via a configuration file?

Create a config file
cat .yamllint.yaml

extends: default

rules:
  # 90 chars should be enough, but don't fail if a line is longer
  line-length:
    max: 90
    level: warning

  # Disabling the document-start error messages
  document-start:
    present: false

yamllint -c .yamllint.yaml .
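
If you'd rather wire this into a Python-based CI hook instead of shelling out, yamllint can also be used as a library. A minimal sketch, reusing the config file from above; the file name my1.yml is just an example:

from yamllint import linter
from yamllint.config import YamlLintConfig

# Load the same config we used on the command line
conf = YamlLintConfig(file=".yamllint.yaml")

with open("my1.yml") as f:
    # linter.run yields one problem per finding (line, column, level, desc)
    for problem in linter.run(f, conf):
        print(problem.line, problem.column, problem.level, problem.desc)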


Wednesday, February 22, 2017

Simple Gradle file to just download dependencies

If you are managing repositories like Nexus or Artifactory, you often receive complaints from developers that their dependency is not getting downloaded, even though it is present in Nexus/Artifactory. Particularly if you are hosting a local mirror repository (for example, for local India developers), they will suspect the mirror caching as well. But you can always browse the artifacts at their expected paths in Nexus/Artifactory; if an artifact is found there and a developer still complains that it is not getting downloaded, then you need to troubleshoot the Gradle or Maven config file, or possibly a network issue.

In this blog post, I'm posting a simple Gradle file that just downloads dependencies, which will help us debug download issues.

Where does Gradle store artifacts locally on our machine?

Typically at $HOME/.gradle/caches/modules-2/files-2.1/<GROUP-ID>/<ARTIFACT-ID>/<VERSION>

Simple Gradle file to download dependencies

$ cat build.gradle 
apply plugin: 'java'

repositories {
   // The properties nexusMavenRepos, localNexusBaseUrl, etc. are defined in the gradle.properties file shown at the end
   nexusMavenRepos.split().each { repo ->
            maven { url(localNexusBaseUrl + repo) }
        }
}

// The below block contacts the external Maven Central repo; comment it out if you don't want to go external
repositories {
    mavenCentral()
}

dependencies {
    testCompile 'junit:junit:4.8.2'
    compile 'commons-beanutils:commons-beanutils:1.8.3'
    testCompile group: 'com.typesafe.akka', name: 'akka-http-core_2.11', version: '2.4.4'
}

Then run the gradle dependencies task

$ gradle dependencies 

or


$ gradle dependencies --refresh-dependencies    # in case you want to force Gradle to re-download

This will download the requested artifacts under $HOME/.gradle/caches/modules-2/files-2.1/

// typical gradle.properties file for reference
$ cat gradle.properties
nexusBaseUrl = https://my-org-nexus/nexus/content/repositories/
localNexusBaseUrl = https://my-org-blr-nexus/nexus/content/repositories/
nexusMavenRepos = central my-org-third-party-lib

Monday, January 16, 2017

sonarqube postgresql db backup and recovery

postgresql is one of the databases supported by sonarqube. In this blog I'll list the steps involved in taking a backup of the postgresql DB used by sonarqube.

As described in the official postgresql 'Backup and Restore' documentation, there are 3 different approaches to backing up postgresql databases. This blog uses the 'SQL Dump' approach.

If you are trying to install sonarqube on CentOS 7 with postgresql as the DB, refer to this URL: http://frederic-wou.net/sonarqube-installation-on-centos-7-2/. It explains the steps in detail.

Creating Database Dump

The sonarqube server stores its data in a database named 'sonar' under postgresql. So create a dump of this database using one of these commands (a small automation sketch follows the list)
  • pg_dump -U postgres -F t sonar > sonar_db_dump.tar
    • -U postgres: the username to connect to the DB; in this case it is 'postgres'
    • -F t: the format of the dump; in this case the 'tar' format
    • 'sonar' is the name of the database to dump
or
  • pg_dump -U postgres sonar > sonar_db_dump.sql
    • This creates the dump in plain-text format
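
If you want to run the dump on a schedule (e.g. from cron), here is a minimal hedged sketch in Python; the output directory is an assumption, and it relies on the same pg_dump command shown above (using its -f flag to write the file directly):

import datetime
import subprocess

# Date-stamped output path -- adjust the directory for your setup
outfile = "/var/backups/sonar_db_dump_%s.tar" % datetime.date.today().strftime("%Y%m%d")

subprocess.run(
    ["pg_dump", "-U", "postgres", "-F", "t", "-f", outfile, "sonar"],
    check=True,  # raise CalledProcessError if pg_dump fails
)
print("Wrote", outfile)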

Restoring database dump

You can use either the 'psql' or the 'pg_restore' utility to restore the database dump. I have used 'pg_restore' for the advantages described in this link: http://www.postgresqltutorial.com/postgresql-restore-database/
  • First install a plain sonarqube server. You can refer http://frederic-wou.net/sonarqube-installation-on-centos-7-2/ for centos based installation instructions
    • Don't start sonarqube server
  • Restore the Database dump
    • su - postgres 
    • pg_restore -U postgres --dbname=sonar --verbose /tmp/sonar_db_dump.tar
  • Start the sonarqube server
    • /opt/sonarqube-5.4/bin/linux-x86-64/sonar.sh start
    • /opt/sonarqube-5.4/bin/linux-x86-64/sonar.sh status
    • tail -f /opt/sonarqube-5.4/logs/sonar.log
    • Ensure sonarqube started successfully by checking sonar.log
  • Browse to the new sonarqube server at http://<ip>:9000, where you can see the restored projects







Thursday, January 12, 2017

Setting up a GitLab Specific runner on CentOS

In this post, I'll list the steps I followed to configure a GitLab runner on the same host where the GitLab server is running.

What are Runners in GitLab?
In GitLab you can run builds on your merge requests or after a push. Traditionally we used to run builds on a separate CI server like Jenkins; GitLab, though primarily a Git repository, is more than that. The 'Runners' are the machines on which GitLab CI runs your builds.

My GitLab host environment

  • GitLab v8.15.4 EE deployed on AWS
  • OS: CentOS 7.2

Creating a Runner

  • Install docker
    • If you want GitLab to run your builds inside a docker container, you need to install docker

      • curl -fsSL https://get.docker.com/ | sh
      • This script adds the docker.repo repository and installs Docker.
      • sudo systemctl enable docker.service
        • Enable the service
      • sudo systemctl start docker
        • Start the Docker daemon
      • Verify docker is installed correctly by running a test image in a container.
        • sudo docker run --rm hello-world
  • Add GitLab's official repository via Yum
    • curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-ci-multi-runner/script.rpm.sh | sudo bash
  • Install gitlab-ci-multi-runner
    • sudo yum install gitlab-ci-multi-runner
  • Register the runner
    • sudo gitlab-ci-multi-runner register
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com )
https://127.0.0.1/ci
Please enter the gitlab-ci token for this runner
WKC4P9FuurUtQZ4h2xCH
Please enter the gitlab-ci description for this runner
master-runner
INFO[0034] fcf5c619 Registering runner... succeeded
Please enter the executor: shell, docker, docker-ssh, ssh?
docker
Please enter the Docker image (eg. ruby:2.1):
ruby:2.1
INFO[0037] Runner registered successfully. Feel free to start it, but if it's
running already the config should be automatically reloaded!
    • I wanted my runner to run on the same host where GitLab is running, hence I entered https://127.0.0.1/ci for the gitlab-ci coordinator URL
    • To get the gitlab-ci token, browse to
      • Project -> Settings wheel -> Runners -> Specific Runners -> Use the following registration token during setup: WKC4P9FuurUtQZ4h2xCH
  • Verify that Runner is successfully activated
    • Project -> Settings wheel -> Runners -> Runners activated for this project -> You should see a green colour runner

Verifying runners with command line

  • gitlab-runner list
    • List the registered runners
  • gitlab-runner verify
    • This command checks whether registered runners can connect to GitLab server
  • Unregister
    • gitlab-runner unregister --url http://127.0.0.1/ci --token ajsbgjav
    • or
    • gitlab-runner unregister --url http://127.0.0.1/ci --name master-runner
