Sunday, December 31, 2017

Google Cloud Platform Inventory

I just finalized what I believe is the simplest possible way to extract a Google Cloud Platform (GCP) inventory. Unfortunately GCP does not support scheduled triggers for Cloud Functions the way Amazon Web Services (AWS) does with Lambda, so for now we are forced to use cron. Google Customer Support did file a feature request, and hopefully they will eventually deliver a truly serverless scheduler for Cloud Functions. When GCP supports such triggers the code should be almost ready to go, since it already runs on nodejs. BTW, working with async/await makes nodejs really attractive for OS related tasks (read DevOps).
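
Until a native scheduler exists, the "trigger" is just a cron entry that invokes the function on a schedule. A minimal sketch, assuming a hypothetical HTTP-triggered function called gcp-inventory (the URL is illustrative, not the one from my project):
# run the inventory extraction every day at 06:00; function name and URL are placeholders
0 6 * * * curl -s -X POST "https://us-central1-my-project.cloudfunctions.net/gcp-inventory" >> /var/log/gcp-inventory.log 2>&1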

Thursday, December 21, 2017

Tail your logs from a web interface - rtail

Let us say you want to watch some logs in real time, like those coming from your lean release, deploy and e2e pipeline. Here is how to use rtail to expose log streams on a web interface:
  • Expose streams using rtail-server:
    sudo npm install -g rtail
    mkdir ~/rtail-server
    cd ~/rtail-server
    
    Create a process.yaml in that directory
    apps:
      - script: rtail-server
        name: 'rtail-server'
        instances: 0
        exec_mode: cluster
        merge_logs: true
        args: "--web-port 8080 --web-host 0.0.0.0"
    
    Run it
    pm2 start process.yaml
    pm2 save
    # logs are in ~/.pm2
    
  • Stream the logs via the rtail client using simple and effective cron entries (flock -n makes sure each tail is started only once):
    * * * * * ( flock -n 9 || exit 0; tail -F /var/log/ci/release.log | rtail --id "release.log" ) 9>~/rtail-release.lock
    * * * * * ( flock -n 9 || exit 0; tail -F /var/log/ci/gke-deploy.log | rtail --id "deploy.log" ) 9>~/rtail-deploy.lock
    * * * * * ( flock -n 9 || exit 0; tail -F /var/log/ci/e2e.log | rtail --id "e2e.log" ) 9>~/rtail-e2e.lock
    
  • At this point http://localhost:8080 should list the available streams and the log traces coming in from them.
  • WARNING: At the time of this writing there is a bug that mixes output from different streams (https://github.com/kilianc/rtail/issues/110). To work around it, swap in the patched rtail-server.js as shown below:
    sudo cp /usr/local/lib/node_modules/rtail/cli/rtail-server.js /usr/local/lib/node_modules/rtail/cli/rtail-server.js.old
    sudo curl https://raw.githubusercontent.com/mfornasa/rtail/ed16d9e54d19c36ff2b76e68092cb3188664719f/cli/rtail-server.js -o /usr/local/lib/node_modules/rtail/cli/rtail-server.js
    ps -ef | grep rtail-server | grep -v grep | awk '{print $2}' | xargs kill
    diff /usr/local/lib/node_modules/rtail/cli/rtail-server.js /usr/local/lib/node_modules/rtail/cli/rtail-server.js.old
    
    Just refresh your browser and wait till the streams show up again.
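  • To sanity check the whole pipeline without waiting for the CI logs, you can push a test line through the rtail client by hand. A minimal sketch (the stream id is just an example):
    # send one line over UDP to the local rtail-server; a new "test" stream should appear in the web UI
    echo "hello from $(hostname)" | rtail --id "test"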

Thursday, December 14, 2017

Close your SpiceWorks account

Incredibly difficult to get to it; there is no link anywhere. I had to dig into very old posts until I ran into https://community.spiceworks.com/profile/close_account. After closing I was redirected to https://community.spiceworks.com/profile/show/, but that page is broken.

If you have other accounts to close you will need to clear your cookies for the domain, otherwise you will be redirected to the broken page above.
The good news is that closing the account did work.

Saturday, December 09, 2017

Migrating Spiceworks to JIRA Service Desk

Let's keep this simple. I will consider a Spiceworks installation that uses the default sqlite3 database. It is amazing how much this can handle, BTW. It gets slow, but I recently saw a Spiceworks installation handling over 1 GB of sqlite data. I already knew sqlite rocks, and not just on mobile devices.
  1. Install the sqlite3 command line tool on the Spiceworks server
  2. Copy the db (for example from C:\Program Files (x86)\Spiceworks\db) to the sqlite bin directory
  3. Access the db from sqlite
    sqlite3 spiceworks-prod.db
  4. From the sqlite prompt export the relevant fields (a non-interactive variant is sketched after this list):
    .headers on
    .mode csv
    .output spiceworks-prod.csv
    select tickets.id as ticket_id,
      tickets.created_at,
      tickets.summary,
      tickets.description,
      (select email from users where users.id = tickets.assigned_to) as assigned_to,
      tickets.category,
      tickets.closed_at,
      (select email from users where users.id = tickets.created_by) as created_by,
      tickets.due_at,
      tickets.status,
      tickets.updated_at,
      group_concat(comments.created_at || " - " || (select email from users where users.id = comments.created_by) || " - " || comments.body, " [COMMENT_END] ") as comment
      from tickets, comments
      where comments.ticket_id=tickets.id
      group by ticket_id
      order by comments.ticket_id,comments.id;
    
  5. Use the JIRA CSV File Import: point it to the generated spiceworks-prod.csv, select UTF-8 file encoding and the date format yyyy-MM-dd HH:mm:ss, leave imported users inactive, and map the status field (closed to Done; open to Open)
  6. When the import completes, save the configuration and the import logs
  7. Optional: If you are into lean thinking you might want to read a bit about classes of service and triage systems, or just trust me that this is the simplest way to prioritize your work. Go to JIRA Service Desk settings / Issues / Priorities and use the priorities as classes of service. Keep only three, rename them as below, and mark Standard as the default:
    • Expedite: There is no workaround. There is a tangible impact to the business bottom line.
    • Fixed Delivery Date: There is no workaround. It must be done before a certain date. It impacts the business bottom line.
    • Standard: First In First Out. There is a workaround. It impacts the business bottom line.
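
If you prefer not to type the export at the sqlite prompt, the same dot commands and query can be run non-interactively. A minimal sketch, assuming you save the statements from step 4 into a (hypothetical) export.sql next to the database copy:
# run the export script from step 4 against the copy of the Spiceworks db
sqlite3 spiceworks-prod.db < export.sql
# quick sanity checks on the generated CSV before importing it into JIRA
head -3 spiceworks-prod.csv
wc -l spiceworks-prod.csv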

Friday, December 08, 2017

Correctly generate CSV that Excel can automatically open

Software generating CSV should include the byte order mark (BOM) at the start of the text stream. If these bytes are missing, programs like Excel cannot detect the encoding, and simply double clicking the file to open it with Microsoft Excel won't work as expected on Windows or macOS.

You might want to do a simple test yourself. Let us say you have a UTF-8 CSV missing the BOM, and when opened in Excel it renders garbled text. If you open that file in Notepad and save it back under a different name, selecting UTF-8, the new file will render correctly. If you compare the two files (on a *nix system) you will notice the difference is three bytes that identify the encoding of the CSV:
$ diff <(xxd -c1 -p original.csv) <(xxd -c1 -p saved-as-utf8.csv)
0a1,3
> ef
> bb
> bf
Tell the developer in charge of generating the CSV to correct it. As a quick workaround you can use gsed (GNU sed) to insert the UTF-8 BOM at the beginning of the file:
gsed -i '1s/^\(\xef\xbb\xbf\)\?/\xef\xbb\xbf/' file.csv
This command adds the UTF-8 BOM only if it is not already present, so it is idempotent.
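To quickly check whether a file already carries the BOM, a minimal sketch (file.csv is just a placeholder name):
# a UTF-8 BOM shows up as "efbbbf" in the first three bytes
head -c 3 file.csv | xxd -p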

Saturday, December 02, 2017

JIRA revoke license to multiple users from the user interface

I needed to revoke application access for 400 old users that were imported from a different issue management system via CSV import. To my surprise, the current JIRA Cloud version expects you to click one user at a time when revoking licenses. JavaScript to the rescue: right click the page in Chrome, select Inspect, click Console, paste the snippet below and hit Enter. It will click the revoke button every 3 seconds (adjust the interval to your needs):
var interval = setInterval(function() { document.getElementsByClassName('revoke-access')[1].click() }, 3000);
When done, run the below to stop the loop:
clearInterval(interval);
