I was checking out Eureka to see if we could apply it to Hoiio’s microservices. I downloaded the WAR from Maven Central, unzipped it, tweaked a few parameters, and deployed it to AWS. The version I was using was 1.4.4.
Eureka recommends using EIPs for the servers, but I didn’t want to expose my Eureka servers to the public, and since all of our servers were in one AWS region, I put them in my VPC and wrote a script to update the Route 53 configuration on instance launch instead.
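The post doesn’t include the script itself, but a launch script along these lines would do the job — the zone ID and record name below are placeholders, not values from the post:

```shell
# Hypothetical sketch of the instance-launch script: upsert an A record
# pointing at this instance's private IP. Zone ID / record name are made up.
build_change_batch() {
  local record_name="$1" ip="$2"
  cat <<EOF
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "${record_name}",
      "Type": "A",
      "TTL": 60,
      "ResourceRecords": [{"Value": "${ip}"}]
    }
  }]
}
EOF
}

# On launch, the instance would run something like (placeholders throughout):
#   ip=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
#   build_change_batch eureka1.internal.example "$ip" > /tmp/change.json
#   aws route53 change-resource-record-sets \
#       --hosted-zone-id Z2EXAMPLE --change-batch file:///tmp/change.json
```

With a low TTL like the 60 seconds above, a replaced instance becomes resolvable under the same name within about a minute of launching.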
So everything went smoothly. After Eureka was deployed, I implemented a few services, registered them with Eureka, and let them talk to each other. Then one day, I wanted to change several configs and restarted the Eureka servers. Things started to go wrong.
I have a collectd cluster configured inside the VPC. The metrics are sent to an InfluxDB instance, then visualized through Grafana, which sits behind a Google auth proxy for public access.
grafana <- influxdb <- nodes with collectd
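For the first hop, the nodes can push metrics over collectd’s network protocol, which InfluxDB is able to ingest through its collectd-compatible listener. A config fragment sketch — the address and port are made up for illustration:

```
# /etc/collectd/collectd.conf — fragment; hypothetical VPC address
LoadPlugin network
<Plugin network>
  # InfluxDB's collectd-compatible listener inside the VPC
  Server "10.0.0.5" "25826"
</Plugin>
```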
The connections between them are not secured since they are already inside the VPC. Now I have some legacy instances — classic EC2 instances — that want their metrics on Grafana too. I have 2 choices:
I had an experience today that I think is worth sharing.
So we have a microservice system based on message passing over RabbitMQ. For deployment, I wrote a few scripts to make sure each version of a module is placed in a different autoscaling group.
For each deployment, we keep the two versions of the module running for a period of time to make sure the new version does not break the system, then we stop the old version by scaling its autoscaling group down to 0.
The problem is that I set the RabbitMQ timeout pretty high, and when the autoscaling group terminates an instance on scale-down, the TCP connection from that instance to RabbitMQ does not close until the timeout. RabbitMQ still delivers messages to that dead connection, causing timeouts for user requests since there is no consumer at the other end.
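One mitigation I’d try — this is a sketch, not the post’s actual scripts — is to wrap the consumer so that the SIGTERM sent during scale-down (assuming a lifecycle hook or stop script delivers one) is forwarded to the consumer, giving it a chance to close its RabbitMQ connection cleanly rather than leave a half-open socket the broker keeps delivering to:

```shell
# Sketch: forward SIGTERM to the consumer process so it can cancel its
# subscription and close the RabbitMQ connection before the instance dies.
run_with_graceful_shutdown() {
  "$@" &                               # start the consumer, e.g. ./module.sh
  local pid=$!
  # on termination, pass the signal on and wait for the child to finish
  trap 'kill -TERM "$pid" 2>/dev/null; wait "$pid"' TERM INT
  wait "$pid"                          # returns the consumer's exit status
}
```

Independently of shutdown handling, configuring a reasonably short heartbeat interval on the RabbitMQ connection lets the broker detect a dead peer long before the TCP timeout fires.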
Our lead devops engineer has left the company, leaving me as the only devops engineer left. During his last month at the company, we interviewed a few candidates for his replacement. Except for one guy who was OK at scripting, the candidates could hardly write bash scripts. Unfortunately, that guy didn’t have knowledge of distributed systems, so we couldn’t hire him.
I’m not saying that sysadmins who can’t script are bad ones. As long as you can get the job done, I don’t care what tool you use. I’m only disappointed that the sysadmins/devops engineers I interviewed don’t think scripting is useful.
My second post today. While writing the previous post’s paragraph about browsing files with symlinks, I thought maybe I should share a few overridden commands that might be useful.
I find that very often after a cd I immediately run ls to see the list of files in the new directory, and likewise after removing a file, so I put an ls there.
For rm, I don’t want to mistakenly remove an important file, so I added a confirmation before removing files if they are in an essential directory.
The topdisk and showpid commands just shorten pipes that I use a lot, since I only have a 32GB SSD.
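The original snippet didn’t survive the editor, so the bodies below are my reconstruction of the behavior described above, not the post’s exact code — in particular, which directories count as “essential” is a guess:

```shell
# Overridden cd: list the files right after changing directory.
cd() { builtin cd "$@" && ls; }

# Overridden rm: ask for confirmation inside "essential" directories
# (the directory list here is a made-up example).
rm() {
  case "$PWD" in
    "$HOME"|/etc*) command rm -i "$@" ;;
    *)             command rm "$@" ;;
  esac
}

# Shorten the pipes used to hunt for disk hogs and processes.
topdisk() { du -sh ./* 2>/dev/null | sort -rh | head -n "${1:-10}"; }
showpid() { ps aux | grep -i "$1" | grep -v grep; }
```

Using `builtin cd` and `command rm` inside the functions avoids infinite recursion into the overrides themselves.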
Sorry for the indentation; I don’t see an option to keep indentation in this WordPress editor. If you use vim as your editor, you can select the code and use the ‘=’ command to automatically re-indent it. Put these in your .bashrc and they will be there when you start a new terminal.
Although it might seem straightforward, as someone who started using computers with Windows as the OS, I cannot help feeling amazed by the convenience that symbolic links on Unix bring.
Most of the time I work in the shell, including when browsing files. Symbolic links make browsing much faster while keeping the directory structure clean. The best thing is that we can also use a symbolic link in a path fed to an application without any issue. For example, I have a setup with a 32GB SSD, where I install the OS and application packages, and a 512GB HDD, where I store data. The problem is that I keep quite a lot of data in the /home/ directory, where I also have some binaries like .rvm/ and caches that I want to run from the SSD. After some time, the little 32GB SSD filled up and I had no space left, so I simply migrated the /home directory to the HDD, then linked ~/.rvm, ~/.cache, … to /home/bin/, where I put the actual binary files. It works without any issues.
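A minimal demonstration of that last point — that programs follow symlinks in paths transparently — using made-up directories rather than my real layout:

```shell
# /tmp/hdd stands in for the big disk; /tmp/appdir is where an application
# expects its data to live.
mkdir -p /tmp/hdd/data
echo "hello" > /tmp/hdd/data/file.txt

# drop a symlink where the application expects the data
ln -sfn /tmp/hdd/data /tmp/appdir

# the link behaves like the real directory in any path you feed to a program
cat /tmp/appdir/file.txt   # prints "hello"
```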