Tuesday, December 30, 2014

How to calculate SIFT/SURF with Bag-of-Words Model

These articles give a brief introduction to computing SIFT/SURF descriptors with the Bag-of-Words (Bag-of-Features) model:

http://www.codeproject.com/Articles/619039/Bag-of-Features-Descriptor-on-SIFT-Features-with-O

http://www.codeproject.com/Tips/656906/Bag-of-Features-Descriptor-on-SURF-and-ORB-Feature
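
For illustration, here is a minimal Python/OpenCV sketch of the Bag-of-Words pipeline described in those articles. It is not the articles' code: the image list is a made-up placeholder, the vocabulary size of 100 is arbitrary, and depending on your OpenCV build SIFT may live under cv2.xfeatures2d.SIFT_create() instead of cv2.SIFT_create().

import cv2

# Hypothetical training images -- replace with your own files.
image_paths = ["img1.jpg", "img2.jpg"]

sift = cv2.SIFT_create()                    # SIFT detector/descriptor
bow_trainer = cv2.BOWKMeansTrainer(100)     # vocabulary of 100 visual words

# 1) Collect SIFT descriptors from all training images.
for path in image_paths:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    kp, desc = sift.detectAndCompute(img, None)
    if desc is not None:
        bow_trainer.add(desc)

# 2) Cluster the descriptors into the visual vocabulary (codebook).
vocabulary = bow_trainer.cluster()

# 3) Map each image to a fixed-length histogram of visual-word occurrences.
matcher = cv2.BFMatcher(cv2.NORM_L2)
bow_extractor = cv2.BOWImgDescriptorExtractor(sift, matcher)
bow_extractor.setVocabulary(vocabulary)

img = cv2.imread(image_paths[0], cv2.IMREAD_GRAYSCALE)
hist = bow_extractor.compute(img, sift.detect(img, None))
print(hist.shape)   # (1, 100): one BoW descriptor per image

The resulting histograms can then be fed to a classifier such as an SVM.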

Thursday, August 7, 2014

emulating a browser in python with mechanize

Emulating a Browser in Python with mechanize

http://stockrt.github.io/p/emulating-a-browser-in-python-with-mechanize/
 
Posted by Rogério Carvalho Schneider
16 Aug 2009


It is always useful to know how to quickly instantiate a browser from the command line or inside your Python scripts.
Every time I need to automate a task involving web systems, I use this recipe to emulate a browser in Python:
import mechanize
import cookielib

# Browser
br = mechanize.Browser()

# Cookie Jar
cj = cookielib.LWPCookieJar()
br.set_cookiejar(cj)

# Browser options
br.set_handle_equiv(True)
br.set_handle_gzip(True)
br.set_handle_redirect(True)
br.set_handle_referer(True)
br.set_handle_robots(False)

# Follows refresh 0 but not hangs on refresh > 0
br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1)

# Want debugging messages?
#br.set_debug_http(True)
#br.set_debug_redirects(True)
#br.set_debug_responses(True)

# User-Agent (this is cheating, ok?)
br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')]
Now you have this br object: this is your browser instance. With it you can open a page and inspect or interact with it:
# Open some site, let's pick a random one, the first that pops in mind:
r = br.open('http://google.com')
html = r.read()

# Show the source
print html
# or
print br.response().read()

# Show the html title
print br.title()

# Show the response headers
print r.info()
# or
print br.response().info()

# Show the available forms
for f in br.forms():
    print f

# Select the first (index zero) form
br.select_form(nr=0)

# Let's search
br.form['q']='weekend codes'
br.submit()
print br.response().read()

# Looking at some results in link format
for l in br.links(url_regex='stockrt'):
    print l
If you are about to access a password-protected site (HTTP basic auth):
# If the protected site doesn't receive the authentication data you will
# end up with a 401 error in your face
br.add_password('http://safe-site.domain', 'username', 'password')
br.open('http://safe-site.domain')
Thanks to the Cookie Jar we added earlier, you do not have to bother with session handling for authenticated sites, such as when you access a service that requires a POST (form submit) of username and password. Usually the site asks your browser to store a session cookie and expects your browser to send that same cookie back when re-accessing the page. All of this, storing and re-sending the session cookies, is done by the Cookie Jar. Neat!
You can also work with the browsing history:
# Testing presence of link (if the link is not found you would have to
# handle a LinkNotFoundError exception)
br.find_link(text='Weekend codes')

# Actually clicking the link
req = br.click_link(text='Weekend codes')
br.open(req)
print br.response().read()
print br.geturl()

# Back
br.back()
print br.response().read()
print br.geturl()
Downloading a file:
# Download
f = br.retrieve('http://www.google.com.br/intl/pt-BR_br/images/logo.gif')[0]
print f
fh = open(f)
Setting a proxy for your http navigation:
# Proxy and user/password
br.set_proxies({"http": "joe:password@myproxy.example.com:3128"})

# Proxy
br.set_proxies({"http": "myproxy.example.com:3128"})
# Proxy password
br.add_proxy_password("joe", "password")
But if you just want to quickly open a webpage, without the fancy features above, just do this:
# Simple open?
import urllib2
print urllib2.urlopen('http://stockrt.github.com').read()

# With password?
import urllib
opener = urllib.FancyURLopener()
print opener.open('http://user:password@stockrt.github.com').read()
See more on the Python mechanize site, in the mechanize docs and the ClientForm docs.
Also, I have written a post explaining how to handle HTML forms and sessions with Python mechanize and BeautifulSoup.

Tuesday, August 5, 2014

JAVA large volume DNS query problem.

http://stackoverflow.com/questions/11955409/non-blocking-async-dns-resolving-in-java

Is there a clean way to resolve a DNS query (get IP by hostname) in Java asynchronously, in non-blocking way (i.e. state machine, not 1 query = 1 thread - I'd like to run tens of thousands queries simultaneously, but not run tens of thousands of threads)?
What I've found so far:
  • The standard InetAddress.getByName() implementation is blocking, and the standard Java libraries appear to lack any non-blocking implementation.
  • The "Resolving DNS in bulk" question discusses a similar problem, but the only solution found is the multi-threaded approach (i.e. one thread working on only one query at any given moment), which is not really scalable.
  • The dnsjava library is also blocking only.
  • There are ancient non-blocking extensions to dnsjava dating from 2006, thus lacking any modern Java concurrency features such as the Future paradigm and, alas, offering only a very limited queue-only implementation.
  • The dnsjnio project is also an extension to dnsjava, but it too works in the threaded model (i.e. 1 query = 1 thread).
  • asyncorg seems to be the best available solution I've found so far targeting this issue, but:
    • it's also from 2007 and looks abandoned
    • lacks almost any documentation/javadoc
    • uses lots of non-standard techniques such as Fun class
Any other ideas/implementations I've missed?
Clarification. I have a fairly large (several TB per day) amount of logs. Every log line has a host name that can be from pretty much anywhere around the internet and I need an IP address for that hostname for my further statistics calculations. Order of lines doesn't really matter, so, basically, my idea is to start 2 threads: first to iterate over lines:
  • Read a line, parse it, get the host name
  • Send a query to DNS server to resolve a given host name, don't block for answer
  • Store the line and DNS query socket handle in some buffer in memory
  • Go to the next line
And a second thread that will:
  • Wait for DNS server to answer any query (using epoll / kqueue like technique)
  • Read the answer, find which line it was for in a buffer
  • Write line with resolved IP to the output
  • Proceed to waiting for the next answer
A simple model implementation in Perl using AnyEvent shows me that my idea is generally correct and I can easily achieve speeds like 15-20K queries per second this way (a naive blocking implementation gets like 2-3 queries per second, just for the sake of comparison, so that's roughly 4 orders of magnitude of difference). Now I need to implement the same in Java, and I'd like to skip rolling out my own DNS implementation ;)
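
For comparison, here is a rough Python asyncio sketch of the same two-stage pipeline (hostname extraction, bounded in-flight lookups, results written as answers arrive). It is only an illustration of the pipeline shape, not an answer to the Java question: asyncio's getaddrinfo still resolves on a thread pool under the hood, so a genuinely non-blocking resolver would need a library such as aiodns/pycares. The toy line format and the semaphore limit are assumptions.

import asyncio

async def resolve_lines(lines, max_in_flight=500):
    loop = asyncio.get_running_loop()
    sem = asyncio.Semaphore(max_in_flight)      # bound concurrent lookups

    async def resolve_one(line, host):
        async with sem:
            try:
                # NOTE: offloaded to a thread pool by asyncio; swap in aiodns
                # for truly single-threaded, event-driven resolution.
                infos = await loop.getaddrinfo(host, None)
                ip = infos[0][4][0]
            except OSError:
                ip = None
            print(line, ip)                     # "write line with resolved IP"

    tasks = [asyncio.create_task(resolve_one(line, line.split()[0]))
             for line in lines]                 # toy parser: hostname is field 0
    await asyncio.gather(*tasks)

asyncio.run(resolve_lines(["example.com GET /index.html"]))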

Overview of RAMFS and TMPFS on Linux--by Ramesh Natarajan on November 6, 2008

Using ramfs or tmpfs you can allocate part of the physical memory to be used as a partition. You can mount this partition and start writing and reading files like a hard disk partition. Since you’ll be reading and writing to the RAM, it will be faster.

When a vital process becomes drastically slow because of disk writes, you can choose either ramfs or tmpfs file systems for writing files to the RAM.


Both a tmpfs and a ramfs mount give you the power of fast reading and writing of files from and to primary memory. When you test this on a small file, you may not see a huge difference. You'll notice the difference only when you write a large amount of data to a file with some other processing overhead, such as network I/O.

1. How to mount Tmpfs

# mkdir -p /mnt/tmp

# mount -t tmpfs -o size=20m tmpfs /mnt/tmp
The last line of the following df -k output shows the /mnt/tmp tmpfs file system mounted above.
# df -k
Filesystem      1K-blocks  Used     Available Use%  Mounted on
/dev/sda2       32705400   5002488  26041576  17%   /
/dev/sda1       194442     18567    165836    11%   /boot
tmpfs           517320     0        517320    0%    /dev/shm
tmpfs           20480      0        20480     0%    /mnt/tmp

2. How to mount Ramfs

# mkdir -p /mnt/ram

# mount -t ramfs -o size=20m ramfs /mnt/ram
The last line of the following mount output shows the /mnt/ram ramfs file system mounted above.
# mount
/dev/sda2 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
tmpfs on /mnt/tmp type tmpfs (rw,size=20m)
ramfs on /mnt/ram type ramfs (rw,size=20m)
You can mount ramfs and tmpfs during boot time by adding an entry to the /etc/fstab.

3. Ramfs vs Tmpfs

Primarily, both ramfs and tmpfs do the same thing, with a few minor differences.

  • Ramfs will grow dynamically. So, you need to control the process that writes the data to make sure ramfs doesn't go above the available RAM size in the system. Let us say you have 2GB of RAM on your system and created a 1GB ramfs mounted as /tmp/ram. When the total size of /tmp/ram crosses 1GB, you can still write data to it; the system will not stop you from writing more than 1GB. However, when it goes above the total RAM size of 2GB, the system may hang, as there is no place left in RAM to keep the data.
  • Tmpfs will not grow dynamically. It will not allow you to write more than the size you've specified while mounting the tmpfs, so you don't need to worry about controlling the writing process. Instead, it gives an error similar to "No space left on device".
  • Tmpfs uses swap.
  • Ramfs does not use swap.

4. Disadvantages of Ramfs and Tmpfs

Since both ramfs and tmpfs write to system RAM, the data is lost once the system reboots or crashes. So, you should write a process that copies the data from ramfs/tmpfs to disk at periodic intervals, as sketched below. You can also write a process that flushes the data from ramfs/tmpfs to disk while the system is shutting down, but that will not help you in the case of a system crash.
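
As a rough illustration of that periodic flush, here is a small Python sketch. The mount point and backup directory are made-up paths, the interval is arbitrary, and shutil.copytree's dirs_exist_ok flag requires Python 3.8+.

import shutil
import time

TMPFS_DIR = "/mnt/tmp"            # hypothetical tmpfs mount point
BACKUP_DIR = "/var/backup/tmp"    # hypothetical on-disk destination

while True:
    # Copy the whole tree from RAM to disk (dirs_exist_ok: Python 3.8+).
    shutil.copytree(TMPFS_DIR, BACKUP_DIR, dirs_exist_ok=True)
    time.sleep(300)               # flush every 5 minutes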
Table: Comparison of ramfs and tmpfs

Experimentation                              Tmpfs                 Ramfs
Fill maximum space and continue writing      Will display error    Will continue writing
Fixed size                                   Yes                   No
Uses swap                                    Yes                   No
Volatile storage                             Yes                   Yes

If you want your process to write faster, opting for tmpfs is the better choice, provided you take precautions against data loss on a system crash.

This article was written by SathiyaMoorthy, who works at bksystems and writes articles and contributes to open source in his leisure time.

Sunday, August 3, 2014

linux: process goes to "S" status for a while; socket programming problem.


You could try to trace the system calls and signals of one of the concerned processes.
Maybe you'll find a hint about what's going on.

strace -p pid

where pid is the process id as found in the second column of "ps -ef".

You could add the "-f" flag to trace forked child processes as well:

strace -fp pid


Checking a strace -fp pid as suggested, I'm getting a huge amount of the following messages until I interrupt the command:

==========================
strace -fp 30247
Process 30247 attached - interrupt to quit
SYS_7(0x3ffffaf9078, 0, 0xc350, 0, 0, 0x3ffffafc070, 0x800291fc, 0, 0x2000336dc40, 0x3ffffaf9088, 0x2000076e000, 0x20000741518, 0x200006e7840, 0x3ffffaf8fd8, 0x200007a7f30, 0, 0, 0, 0, 0, 0, 0, 0x3ffffaf9078, 0x8000000000000, 0x4050000000000000, 0, 0, 0, 0x4050000000000000, 0, 0, 0) = 0
poll([{fd=3, events=POLLIN|POLLERR|POLLHUP, revents=POLLIN}], 1, 60000) = 1
recv(3, "", 8192, MSG_DONTWAIT)         = 0
nanosleep({0, 50000000}, NULL)          = 0
poll([{fd=3, events=POLLIN|POLLERR|POLLHUP, revents=POLLIN}], 1, 60000) = 1
recv(3, "", 8192, MSG_DONTWAIT)         = 0
nanosleep({0, 50000000}, NULL)          = 0
[... the same poll / recv / nanosleep cycle repeats many more times ...]
poll([{fd=3, events=POLLIN|POLLERR|POLLHUP, revents=POLLIN}], 1, 60000) = 1
recv(3, "", 8192, MSG_DONTWAIT)         = 0
nanosleep({0, 50000000},  <unfinished ...>
Process 30247 detached
================================================

Any ideas? :S


All one can see is that the program polls for an event on file descriptor 3 (which is obviously a socket), with a timeout of 60 seconds.
A POLLIN event is received, which means "there is data to read", and the return value of "1" means that a single structure has been returned (the one indicating "POLLIN").

The subsequent non-blocking recv() receives a message of length zero from that socket, so the process goes to sleep for a while and then reissues the poll() call.

So the reason this process spends most of its time sleeping is that the socket keeps reporting itself as ready, but the data read is of zero length (on a stream socket, a zero-length return from recv() normally means the peer has closed the connection). That is either a communications problem or intended behaviour; maybe the other end of the communication path posts an event regularly to keep your process from timing out.
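
For reference, here is a toy Python reproduction (not the original program) of the pattern seen in the trace: poll() reports POLLIN, the non-blocking recv() returns an empty string because the peer has closed, and the loop just sleeps 50 ms and polls the dead socket again. The target host and request are placeholders.

import select
import socket
import time

# Connect somewhere and send a request the server will answer and then close.
sock = socket.create_connection(("example.com", 80))
sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")

p = select.poll()
p.register(sock, select.POLLIN | select.POLLERR | select.POLLHUP)

while True:
    events = p.poll(60000)                     # 60 s timeout, as in the trace
    if not events:
        continue
    data = sock.recv(8192, socket.MSG_DONTWAIT)
    if data == b"":
        # Zero-length read: the peer has closed. A correct client would
        # break out here; the traced program instead sleeps and retries.
        time.sleep(0.05)
    else:
        print(data[:80])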

You could additionally issue

lsof -p pid

to check where file descriptor (socket) 3 is connected. The descriptor number is in the FD column, and the connection info is in the NAME column.

If the process whose data you posted has ended in the meantime and you're going to examine a different process please make sure to check for the correct socket descriptor - that's the "fd=..." number in the poll() call.

In any case (desired behaviour or communications problem), you should go to your development folks, present this analysis to them, and ask them what the deal is.

Thursday, July 31, 2014

solve: "X11 forwarding request failed on channel 0" problem after disable ipv6 of centos

1. Started sshd in debug mode (sudo rc.d stop sshd, sudo /usr/sbin/sshd -d)
2. Noticed "Failed to allocate internet-domain X11 display socket." in the debugging output
3. The page http://forums.fedoraforum.org/showthread.php?t=270333 indicates a possible relation to IPv6 being disabled.
4. Checked sysctl net.ipv6.conf.all.disable_ipv6, and indeed, IPv6 was disabled.
5. Re-enabled by undoing https://wiki.archlinux.org/index.php/IPv6_-_Disabling_the_Module
5'. Alternatively, adding AddressFamily inet to /etc/ssh/sshd_config would have also worked


Comment: option (5') is the one to use if you have to keep IPv6 disabled.

Monday, July 28, 2014

install the EPEL repo for CentOS

rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
 
reference:
https://fedoraproject.org/wiki/EPEL/FAQ#How_can_I_install_the_packages_from_the_EPEL_software_repository.3F 

Friday, June 13, 2014

multiple functions in one MATLAB M-file.

The first function in an m-file (i.e. the main function) is invoked when that m-file is called. It is not required that the main function have the same name as the m-file, but for clarity it should. When the function and file name differ, the file name must be used to call the main function.
All subsequent functions in the m-file, called local functions (or "subfunctions" in the older terminology), can only be called by the main function and other local functions in that m-file. Functions in other m-files cannot call them.
In addition, you can also declare functions within other functions. These are called nested functions, and they can only be called from within the function in which they are nested. They can also access variables in the functions in which they are nested, which makes them quite useful, albeit slightly tricky to work with.
More food for thought...
There are ways around the normal function scoping behavior outlined above, such as passing function handles as output arguments as mentioned in Jonas' answer. However, I wouldn't suggest making it a habit of resorting to such tricks, as there are likely much better options for organizing your files.
For example, let's say you have a main function A in an m-file A.m, along with local functions D, E, and F. Now let's say you have two other related functions B and C in m-files B.m and C.m, respectively, that you also want to be able to call D, E, and F. Here are some options you have:
  • Put D, E, and F each in their own separate m-files, allowing any other function to call them. The downside is that the scope of these functions is large and isn't restricted to just A, B, and C, but the upside is that this is quite simple.
  • Create a defineMyFunctions m-file (like in Jonas' example) with D, E, and F as local functions and a main function that simply returns function handles to them. This allows you to keep D, E, and F in the same file, but it doesn't do anything regarding the scope of these functions since any function that can call defineMyFunctions can invoke them. You also then have to worry about passing the function handles around as arguments to make sure you have them where you need them.
  • Copy D, E and F into B.m and C.m as local functions. This limits the scope of their usage to just A, B, and C, but makes updating and maintenance of your code a nightmare because you have three copies of the same code in different places.
  • Use private functions! If you have A, B, and C in the same directory, you can create a subdirectory called private and place D, E, and F in there, each as a separate m-file. This limits their scope so they can only be called by functions in the directory immediately above (i.e. A, B, and C) and keeps them together in the same place (but still different m-files):
    myDirectory/
        A.m
        B.m
        C.m
        private/
            D.m
            E.m
            F.m
All this goes somewhat outside the scope of your question, and is probably more detail than you need, but I thought it might be good to touch upon the more general concern of organizing all of your m-files. ;)

Tuesday, April 15, 2014

linux, find a line in a file and delete it, by using "sed"

Q: I need to grep for a particular 'string' in a file and remove the entire line where the string occurs. I want it to work across a collection of files. Can you help?
A:  It is possible to use grep for this: grep -v string file will output all lines that do not contain the string. But sed is a more suitable tool for batch editing.
sed --in-place '/some string/d' myfile
will delete all lines containing 'some string'. To process a collection of files, you need to use a for loop (or find), because sed's --in-place option only works on individual files. One of these commands will do it:
for f in *.txt; do sed --in-place '/some string/d' "$f"; done

find -name '*.txt' -exec sed --in-place=.bak '/some string/d' "{}" ';'
Adding =.bak in the latter example makes sed save a backup of the original file before modifying it.

Saturday, April 5, 2014

VIM: Insert a string at the beginning of each line

This replaces the beginning of each line with "//":
:%s!^!//!
This replaces the beginning of each selected line (use visual mode to select) with "//":
:'<,'>s!^!//!
Refer:
http://stackoverflow.com/questions/253380/how-do-i-insert-text-at-beginning-of-a-multi-line-selection-in-vi-vim

Tuesday, April 1, 2014

set up Rsyslog and MongoDB on CentOS 6.3

I made some modifications and corrected some errors from the following links.

The reference links:
1)http://loganalyzer.adiscon.com/articles/using-mongodb-with-rsyslog-and-loganalyzer
2) http://wiki.rsyslog.com/index.php/HOWTO:_install_rsyslog_%2B_mongodb_%2B_loganalyzer_in_RHEL_6.2
3)http://wiki.rsyslog.com/index.php/Rsyslog_v6_configuration_example


// Make sure that you have installed the EPEL repository:
http://fedoraproject.org/wiki/EPEL/FAQ#How_can_I_install_the_packages_from_the_EPEL_software_repository.3F



Install Apache + PHP:
----------------------
$yum install httpd php
$chkconfig httpd on
$service httpd start


Install Adiscon Loganalyzer:
-----------------------------
$yum install php-bcmath php-gd
Download, install and configure Adiscon Loganalyzer as documented in the INSTALL file to "/var/www/html". You can find the INSTALL file in the loganalyzer sources.
// cp all the folders and files from "/src" to "/var/www/html"


Install MongoDB:
----------------
$yum install mongodb mongodb-server php-pecl-mongo
$chkconfig mongod on
$service mongod start

Install the following three packages in this order: libestr -> libee -> liblognorm.
The reason we need to build them from source is that we need to compile rsyslog ourselves.

Install libee, libestr, liblognorm:
------------------------------------
$yum install gcc make pkgconfig
Download, compile and install sources with:
$./configure --libdir=/usr/lib64/ --includedir=/usr/include --prefix=/usr
$make
$make install

Install libmongo-client:
------------------------------
$yum install git automake autoconf libtool glib2-devel
$git clone git://github.com/algernon/libmongo-client.git
$cd libmongo-client
$./autogen.sh
$./configure --libdir=/usr/lib64/ --includedir=/usr/include --prefix=/usr
$make
$make install
You may need the following packages:
yum install json-c-devel libuuid libuuid-devel 

Install rsyslog:

yum install rsyslog-mongodb
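
After rsyslog is configured to log to MongoDB (see the ommongodb setup in the linked HOWTOs), a quick sanity check with pymongo can confirm that log documents are arriving. The database and collection names below ("logs" / "syslog") are assumptions; adjust them to match your rsyslog configuration. count_documents() needs pymongo 3.7+.

from pymongo import MongoClient

client = MongoClient("localhost", 27017)
collection = client["logs"]["syslog"]          # assumed db/collection names

print("documents so far:", collection.count_documents({}))
# Show the five most recently inserted log documents.
for doc in collection.find().sort("$natural", -1).limit(5):
    print(doc.get("msg"))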


manage LaTeX packages manually on Ubuntu

1. Ubuntu doesn't support TeX Live's automatic package manager (tlmgr); they suggest you manage packages manually.
http://tex.stackexchange.com/questions/73116/fresh-install-texlive-2012-ubuntu-12-04-tlmgr-nowhere-to-be-found

2) Here are the instructions for manual management.
Installing packages manually
If a package you desire is not in Ubuntu's repositories, you may look on CTAN's web site or TeX Catalogue Online to see if they have the package. If they do, download the archive containing the files. In this example, we'll install example package foo, contained in foo.tar.gz.

Once foo.tar.gz has finished downloading, we unzip it somewhere in our home directory:


tar xvf foo.tar.gz
This expands to folder foo/. We cd into foo/ and see foo.ins. We now run LaTeX on the file:


latex foo.ins
This will generate foo.sty. We now have to copy this file into the correct location. This can be done in two ways. After these, you can use your new package in your LaTeX document by inserting \usepackage{foo} in the preamble.

User install

We will copy this into our personal texmf tree. The advantages of this solution are that if we migrate our files to a new computer, we will remember to take our texmf tree with us, resulting in keeping the same packages we had. The disadvantages are that if multiple users want to use the same packages, the tree will have to be copied to each user's home folder.

We'll first create the necessary directory structure:


cd ~
mkdir -p texmf/tex/latex/foo
Notice that the final directory created is labeled foo. It is a good idea to name directories after the packages they contain. The -p attribute to mkdir tells it to create all the necessary directories, since they don't exist. Now, using either the terminal, or the file manager, copy foo.sty into the directory labeled foo.

Now, we must make LaTeX recognize the new package:


texhash ~/texmf
System install

We will copy foo into the LaTeX system tree. The advantage is that every user on the computer can access these files. The disadvantages are that the method requires superuser privileges, and after a possible reformat/reinstall you have to repeat the procedure.

First, go to the folder where your foo.sty is located. The following commands will create a new directory for your files and copy the file into it:


sudo mkdir /usr/share/texmf/tex/latex/foo
sudo cp foo.sty /usr/share/texmf/tex/latex/foo
Then update the LaTeX package cache:


sudo texhash

Wednesday, March 26, 2014

change Font size of matlab plot

set(findall(figurehandle,'type','text'),'fontSize',16)

Sunday, March 23, 2014

how to debug a Maven project

use remote debugging

mvn exec:exec -Dexec.executable="java" -Dexec.args="-classpath %classpath -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=1044 com.mycompany.app.App"


Then, in Eclipse, you can use remote debugging and attach the debugger to localhost:1044.

Tuesday, February 4, 2014

chkconfig replacement in Ubuntu

http://www.debuntu.org/how-to-managing-services-with-update-rc-d/

How-To: Managing Services With Update-Rc.D

Posted by chantra on July 5th, 2007
Linux services can be started, stopped and reloaded with the use of scripts stored in /etc/init.d/.
However, during start-up or when changing runlevel, those scripts are looked up in /etc/rcX.d/, where X is the runlevel number.
This tutorial will explain how one can activate, deactivate or modify a service start up.
When installing a new service under Debian, the default is to enable it. So, for instance, if you just installed the apache2 package, the apache2 service will be started right after installation and again upon every following reboot.
If you do not use apache all the time, you might want to stop this service from starting up at boot and simply start it manually when you actually need it by running this command:
# /etc/init.d/apache2 start
You could either disable this service on boot up by removing any symbolic links in /etc/rcX.d/SYYapache2 or by using update-rc.d.
The advantage of using update-rc.d is that it will take care of removing/adding any required links to /etc/init.d automatically.
Taking apache2 as an example, let's examine what /etc/rcX.d looks like:
# ls -l /etc/rc?.d/*apache2
lrwxrwxrwx 1 root root 17 2007-07-05 22:51 /etc/rc0.d/K91apache2 -> ../init.d/apache2
lrwxrwxrwx 1 root root 17 2007-07-05 22:51 /etc/rc1.d/K91apache2 -> ../init.d/apache2
lrwxrwxrwx 1 root root 17 2007-07-05 22:51 /etc/rc2.d/S91apache2 -> ../init.d/apache2
lrwxrwxrwx 1 root root 17 2007-07-05 22:51 /etc/rc3.d/S91apache2 -> ../init.d/apache2
lrwxrwxrwx 1 root root 17 2007-07-05 22:51 /etc/rc4.d/S91apache2 -> ../init.d/apache2
lrwxrwxrwx 1 root root 17 2007-07-05 22:51 /etc/rc5.d/S91apache2 -> ../init.d/apache2
lrwxrwxrwx 1 root root 17 2007-07-05 22:51 /etc/rc6.d/K91apache2 -> ../init.d/apache2
As you can see, for runlevels 0, 1 and 6 there is a K at the beginning of the link, and for runlevels 2, 3, 4 and 5 there is an S. Those two letters stand for Kill and Start.
On Debian and Ubuntu, runlevels 2, 3, 4 and 5 are multi-user runlevels.
Runlevel 0 is halt.
Runlevel 1 is single-user mode.
Runlevel 6 is reboot.

1. Removing A Service

If you want to totally disable the apache2 service by hand, you would need to delete every single link in /etc/rcX.d/. Using update-rc.d, it is as simple as:
# update-rc.d -f apache2 remove
The -f flag forces the removal of the symlinks even if /etc/init.d/apache2 still exists.
Note: this command only disables the service until the next time the service is upgraded. If you want to make sure the service won't be re-enabled upon upgrade, you should also type the following:
# update-rc.d apache2 stop 80 0 1 2 3 4 5 6 .

2. Adding A Service

2.1. Default Priorities

Now, if you want to re-add this service to be started on boot up, you can simply use:
# update-rc.d apache2 defaults
Adding system startup for /etc/init.d/apache2 ...
/etc/rc0.d/K20apache2 -> ../init.d/apache2
/etc/rc1.d/K20apache2 -> ../init.d/apache2
/etc/rc6.d/K20apache2 -> ../init.d/apache2
/etc/rc2.d/S20apache2 -> ../init.d/apache2
/etc/rc3.d/S20apache2 -> ../init.d/apache2
/etc/rc4.d/S20apache2 -> ../init.d/apache2
/etc/rc5.d/S20apache2 -> ../init.d/apache2

2.2. Custom Priorities

But as you can see, the default priority is 20, which is quite different from 91: an S20 link is started before an S91, and a K91 is killed before a K20.
To force apache2 to be started with priority 91 for both Start and Kill, we need to use the following command:
# update-rc.d apache2 defaults 91
Adding system startup for /etc/init.d/apache2 ...
/etc/rc0.d/K91apache2 -> ../init.d/apache2
/etc/rc1.d/K91apache2 -> ../init.d/apache2
/etc/rc6.d/K91apache2 -> ../init.d/apache2
/etc/rc2.d/S91apache2 -> ../init.d/apache2
/etc/rc3.d/S91apache2 -> ../init.d/apache2
/etc/rc4.d/S91apache2 -> ../init.d/apache2
/etc/rc5.d/S91apache2 -> ../init.d/apache2

2.3. Different Priorities For Start And Kill

Alternatively, if you want to set different priorities for Start than for Kill, let's say Start with 20 and Kill with 80, you will need to run:
# update-rc.d apache2 defaults 20 80
Adding system startup for /etc/init.d/apache2 ...
/etc/rc0.d/K80apache2 -> ../init.d/apache2
/etc/rc1.d/K80apache2 -> ../init.d/apache2
/etc/rc6.d/K80apache2 -> ../init.d/apache2
/etc/rc2.d/S20apache2 -> ../init.d/apache2
/etc/rc3.d/S20apache2 -> ../init.d/apache2
/etc/rc4.d/S20apache2 -> ../init.d/apache2
/etc/rc5.d/S20apache2 -> ../init.d/apache2

3. Specifying Custom Runlevels

Finally, if you only want to Start and Kill on specific runlevels, like for instance starting apache with priority 20 on runlevels 2, 3, 4 and 5 and Kill with priority 80 on runlevels 0, 1 and 6:
# update-rc.d apache2 start 20 2 3 4 5 . stop 80 0 1 6 .
Adding system startup for /etc/init.d/apache2 ...
/etc/rc0.d/K80apache2 -> ../init.d/apache2
/etc/rc1.d/K80apache2 -> ../init.d/apache2
/etc/rc6.d/K80apache2 -> ../init.d/apache2
/etc/rc2.d/S20apache2 -> ../init.d/apache2
/etc/rc3.d/S20apache2 -> ../init.d/apache2
/etc/rc4.d/S20apache2 -> ../init.d/apache2
/etc/rc5.d/S20apache2 -> ../init.d/apache2
Or, to start with priority 20 for runlevel 2, 3 and 4 and priority 30 for runlevel 5 and kill with priority 80 for runlevel 0, 1 and 6:
# update-rc.d apache2 start 20 2 3 4 . start 30 5 . stop 80 0 1 6 .
Adding system startup for /etc/init.d/apache2 ...
/etc/rc0.d/K80apache2 -> ../init.d/apache2
/etc/rc1.d/K80apache2 -> ../init.d/apache2
/etc/rc6.d/K80apache2 -> ../init.d/apache2
/etc/rc2.d/S20apache2 -> ../init.d/apache2
/etc/rc3.d/S20apache2 -> ../init.d/apache2
/etc/rc4.d/S20apache2 -> ../init.d/apache2
/etc/rc5.d/S30apache2 -> ../init.d/apache2

Wednesday, January 29, 2014

Configuring Supermicro IPMI interface NIC using ipmitool

Newer Supermicro IPMI interfaces come configured by default in "failover" mode, which means that the IPMI will bind either to the dedicated IPMI NIC port or share one of the machine's NIC ports.
This can cause the IPMI to come up on the wrong NIC and hence be inaccessible if the dedicated NIC doesn't detect a link.
You can use ipmitool to change this behaviour.
First query the current setting:
ipmitool raw 0x30 0x70 0x0c 0
The result will be one of the following:
0x00 = Dedicated
0x01 = Onboard / Shared
0x02 = Failover

Next, to configure it, use one of the following.
For older models:
ipmitool raw 0x30 0x70 0x0c 1 1 0
For X9 motherboards:
ipmitool raw 0x30 0x70 0x0c 1 0
References for this can be found here:
http://www.supermicro.com/support/faqs/faq.cfm?faq=9829
http://www.supermicro.com/support/faqs/faq.cfm?faq=14417

Thursday, January 23, 2014

eth* and em*

https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/appe-Consistent_Network_Device_Naming.html

Monday, January 13, 2014

Difference between pylab and pyplot

Matplotlib is the whole package; pylab is a module in matplotlib that gets installed alongside matplotlib; and matplotlib.pyplot is a module in matplotlib.
Pyplot provides the state-machine interface to the underlying plotting library in matplotlib. This means that figures and axes are implicitly and automatically created to achieve the desired plot. For example, calling plot from pyplot will automatically create the necessary figure and axes to achieve the desired plot. Setting a title will then automatically set that title to the current axes object:
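For instance, a minimal pyplot sketch of that implicit figure/axes creation (assuming only that matplotlib and numpy are installed):

import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 2 * np.pi, 100)
plt.plot(x, np.sin(x))        # implicitly creates a figure and an axes
plt.title("sin(x)")           # applies to the current axes
plt.show()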
Pylab combines the pyplot functionality (for plotting) with the numpy functionality (for mathematics and for working with arrays) in a single namespace, making that namespace (or environment) even more MATLAB-like. For example, one can call the sin and cos functions just like you could in MATLAB, as well as having all the features of pyplot.
The pyplot interface is generally preferred for non-interactive plotting (i.e., scripting). The pylab interface is convenient for interactive calculations and plotting, as it minimizes typing. Note that this is what you get if you use the ipython shell with the -pylab option, which imports everything from pylab and makes plotting fully interactive.
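
The same plot written against the pylab namespace looks like this (just a sketch; the wildcard import is convenient interactively but generally discouraged in scripts):

from pylab import *           # pulls the pyplot and numpy names into one namespace

x = linspace(0, 2 * pi, 100)
plot(x, sin(x))
title("sin(x)")
show()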