I was checking out Eureka to see if we could apply it to Hoiio’s microservices. I downloaded the war from Maven Central, unzipped it, tweaked a few parameters and deployed it to AWS. The version I was using was 1.4.4.
Eureka recommends using EIPs for the servers, but I didn’t want to expose my Eureka servers to the public, and since all of our servers were in one AWS region, I put them in my VPC and wrote a script to update the Route 53 configuration on instance launch instead.
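The launch script itself isn’t shown here, but a minimal sketch of the idea could look like this — assuming the AWS CLI is available on the instance, and with the zone id and record name as placeholders, not the real values:

```shell
# Hypothetical sketch: upsert an A record for this instance at launch time.

# build the Route 53 change batch for a given record name and IP
make_change_batch() {
  local record="$1" ip="$2"
  printf '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"%s","Type":"A","TTL":60,"ResourceRecords":[{"Value":"%s"}]}}]}' \
    "$record" "$ip"
}

# run on instance launch (e.g. from user data): register our private IP
register_dns() {
  local zone_id="$1" record="$2" ip
  # EC2 instance metadata gives us the private IP
  ip=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
  make_change_batch "$record" "$ip" > /tmp/change.json
  aws route53 change-resource-record-sets \
      --hosted-zone-id "$zone_id" --change-batch file:///tmp/change.json
}

# usage on the instance: register_dns Z0000000000 eureka1.internal.example.
```

Since the record lives in a private hosted zone attached to the VPC, the Eureka servers stay unreachable from the public internet.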
So everything went smoothly. After Eureka was deployed, I implemented a few services, registered them with Eureka and let them talk to each other. Then one day, I wanted to change several configs and restarted the Eureka servers. Things started to go wrong.
I have this collectd cluster configured inside a VPC. The metrics are sent to an InfluxDB instance, then visualized through Grafana, which sits behind a Google auth proxy for public access.
| grafana <- influxdb <- node with collectd |
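For reference, the node side of that pipeline can be as small as collectd’s network plugin pointed at InfluxDB’s collectd listener. The hostname below is a placeholder, and the exact InfluxDB section name varies by version:

```
# /etc/collectd/collectd.conf on each node
LoadPlugin network
<Plugin network>
  Server "influxdb.internal" "25826"
</Plugin>

# influxdb.conf: enable the collectd input on the same port
[[collectd]]
  enabled = true
  bind-address = ":25826"
  database = "collectd"
```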
The connections between them are not secured since they are already inside the VPC. Now I have these legacy instances, classic EC2 instances, that want their metrics on Grafana too. I have 2 choices:
I had an experience today that I think is worth sharing.
So we have this microservice system based on message passing over RabbitMQ. For the deployment of each module, I wrote a few scripts to make sure each version of a module is placed in a different autoscaling group.
For each deployment, we keep the two versions of the module running for a period of time to make sure the new version does not break the system, then we stop the old version by scaling its autoscaling group down to 0.
The problem is that I set the RabbitMQ timeout pretty high, and when the autoscaling group terminates an instance on scale-down, the TCP connection from that instance to RabbitMQ does not close until the timeout. RabbitMQ still delivers messages to that dead connection, causing timeouts for user requests since there is no consumer at the other end.
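One mitigation — a sketch, not necessarily what we ended up shipping — is to lower the heartbeat so RabbitMQ notices dead TCP connections quickly, closes them and requeues their unacked messages. In the classic rabbitmq.config syntax that would be:

```
% /etc/rabbitmq/rabbitmq.config — detect dead peers after ~30s
[
  {rabbit, [
    {heartbeat, 30}
  ]}
].
```

The heartbeat is negotiated per connection, so clients can also request a lower value when they connect; and the cleanest option is for the instance to close its RabbitMQ connection in a shutdown hook before the autoscaling group terminates it.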
Our lead devops engineer has left the company, leaving me the only devops left. During his last month at the company, we interviewed a few guys for his replacement. Except for one guy who was OK at scripting, the other candidates could hardly write bash scripts. Unfortunately, that guy didn’t have knowledge of distributed systems, so we couldn’t hire him.
I’m not saying that sysadmins who can’t script are bad ones. As long as you can get the job done, I don’t care what tool you use. I’m only disappointed that the sysadmins/devops engineers I interviewed don’t think scripting is useful.
Recently, my gf had to work on a project where she organizes games by posting questions to Facebook pages. A page’s followers who answer correctly, with a bit of luck, will receive a small gift. The game is part of an ads campaign for her customer.
Typical comment for a sports game score-guessing post, with a lucky number at the end
I had my first surgery last Saturday. It was a small tumor in my neck. Sounds serious? In fact, it’s not. I had it for 2 years and everything was fine; my thyroid (?) just decided to grow a little bigger than normal, and the doctor recommended cutting away a small part of it.
Anyway, the surgery lasted 2 hours straight! I had to stay in the hospital for 2 days. The cut doesn’t hurt at all; maybe they gave me too much painkiller. It was an experience to remember though.
So I had to do a database migration that involved importing external data massively into the current production database. Since I had already implemented quite a few methods to do all the validation, duplicate handling and so on using the Django ORM, I didn’t want to write that code again using raw MySQL commands. It would be a pain if I forgot to set a certain field and broke the system later on. I decided to look around on the internet for help importing an existing Django project into my script. It turns out to be straightforward (thank you internet!)
Basically, I needed to add my project path to the system path and tell Python to import my project settings:
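That boils down to a couple of lines at the top of the script — the project path and settings module name below are placeholders for your own:

```python
import os
import sys

# make the project package importable (placeholder path)
sys.path.insert(0, "/path/to/myproject")

# point Django at the project's settings module
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
```

On Django 1.7+ you also need to call django.setup() after this, before touching any models.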
from <app_name>.models import *
from <app_name>.views import <whatever method you need>
In my case, I wanted to run a multithreaded process to boost the data processing speed on my quad-core CPU, so I needed to close the Django db connection before spawning the threads. The reason is that Django only uses a single connection for all threads, which causes conflicts between them. By explicitly closing the db connection before starting a thread, each thread will create its own connection.
from django import db
for alias in db.connections.databases:
    db.connections[alias].close()
# start your threads here; each one will open its own connection
My second post today. While writing the previous post’s paragraph about browsing files with symlinks, I thought maybe I should share a few overridden commands that might be useful.
I find that very often after a cd, I immediately run ls to see the list of files in the new directory, and also after I remove a file, so I put it there.
For rm, I don’t want to mistakenly remove an important file, so I put a confirmation before removing files if they are in an essential directory.
The topdisk and showpid commands just shorten pipes that I use a lot, since I only have a 32GB SSD.
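Here is a hypothetical reconstruction of the four overrides described above — the essential-directory list and the exact pipes behind topdisk and showpid are my guesses, not the originals:

```shell
# cd, then list the new directory right away
cd() { builtin cd "$@" && ls; }

# rm with a confirmation inside essential directories (list is a guess),
# and an ls afterwards everywhere else
rm() {
  case "$PWD" in
    "$HOME"|/etc*) command rm -i "$@" ;;
    *)             command rm "$@" && ls ;;
  esac
}

# biggest disk hogs in the current directory (handy on a 32GB SSD)
topdisk() { du -sh ./* 2>/dev/null | sort -rh | head -n 10; }

# processes matching a name, without the grep itself
showpid() { ps aux | grep -i "$1" | grep -v grep; }
```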
Sorry for the indentation; I don’t see an option to keep indentation in this WordPress editor. If you use vim as your editor, you can select the code and use the ‘=’ command to automatically indent it. Put these in your .bashrc and they’ll be there when you start a new terminal.
Although it might seem straightforward, for a man who started using computers with Windows as the OS, I cannot help feeling amazed by the convenience that symbolic links on unix bring.
Most of the time I work in the shell, including browsing files. Symbolic links make browsing so much faster while keeping the directory structure clean. The best thing is that we can also use a symbolic link in a path name to feed to an application without any issue. For example, I have this setup with a 32GB SSD where I install the OS and application packages, and a 512GB HDD where I store data. The problem is that I store quite a lot of data in the /home/ directory, where I also have some binaries like .rvm/ or caches that I want to run from the SSD. After some time, the little 32GB SSD fills up and I have no space left, so I simply migrate the /home directory to the HDD, then link ~/.rvm, ~/.cache,… to /home/bin/ where I put the actual binary files. It works without any issues.
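The mechanics of that trick are just “move the real directory, leave a symlink behind”. As a toy demo under /tmp (my actual paths and mount points differ):

```shell
# pretend /tmp/demo/home is the home dir and /tmp/demo/hdd is the big disk
mkdir -p /tmp/demo/home/.rvm /tmp/demo/hdd/bin

# move the real files to the other disk, then symlink the old path to them
mv /tmp/demo/home/.rvm /tmp/demo/hdd/bin/.rvm
ln -s /tmp/demo/hdd/bin/.rvm /tmp/demo/home/.rvm

# applications keep using the old path; it resolves through the link
ls /tmp/demo/home/.rvm
```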
I have never been a sysadmin, but I’ve had to work a lot on that sort of thing recently. That’s the thing when you are an intern at a start-up! I had to work in almost every position: sysadmin, back-end dev, front-end dev, multimedia, web, mobile dev. It’s kinda fun in fact, and I have learned so much.
So we have this service running on a powerful server. Unfortunately, it has to handle too many things at the same time. We tried to distribute many other jobs to different servers (the first day when I came, the company only had 1 server; now we have 13!), but this one alone does not have enough resources to handle the workload we’re going to have when our new customer starts their campaign a bit later.