Wednesday, May 6, 2015

USB bulk read times out, protocol analyzer says data delivered

I'm trying to write an open-source libusb driver for a piece of hardware. Unfortunately, a time- and data-accurate packet replay on Linux causes a certain bulk read to time out reproducibly without returning the bulk data (LIBUSB_ERROR_TIMEOUT, URB status: -ENOENT). According to Wireshark, the Windows application behaves predictably and always receives this data when run on VMware Workstation on the same Linux laptop.

I used Wireshark to generate packet captures from both the Windows app and my app and, as far as I can tell, they are identical up to the error. I went a step further and borrowed a USB protocol analyzer (Total Phase Beagle 12) to capture data on the wire. My previous testing was at high speed, but I was able to build an equivalent test case at full speed so that I could use the analyzer. It shows that the device is in fact correctly replying to the request, but the reply somehow never makes it to userspace (or into a URB, for that matter).

A few packets before the error there is a very similar packet (same bulk IN request, same data returned). The Python libusb code does not raise an error there, but the URB status is -EREMOTEIO. As far as I can tell this may be normal and may simply reflect a difference in how VMware and libusb set Linux kernel USB parameters. But it is probably still a hint: some differing setting is making a big difference.
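For what it's worth, on Linux a URB status of -EREMOTEIO usually indicates a short transfer on a URB submitted with the short-not-OK flag set. One cheap experiment is to always request bulk reads in whole multiples of the endpoint's wMaxPacketSize; a small helper sketch (the PyUSB usage, VID/PID, and endpoint below are placeholders, not values from this device):

```python
def padded_length(nbytes, wmax):
    """Round a requested bulk-read size up to a whole number of
    max-packet-size packets, so the transfer cannot end on a short
    packet (a common trigger for -EREMOTEIO when short-not-OK is set)."""
    return ((nbytes + wmax - 1) // wmax) * wmax

# Hypothetical PyUSB usage (VID/PID and endpoint are placeholders):
# import usb.core
# dev = usb.core.find(idVendor=0x1234, idProduct=0x5678)
# data = dev.read(0x81, padded_length(13, 64), timeout=2000)
```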

I tried this on Linux 3.5 x64 and Linux 3.18 x64 systems under Ubuntu 12.04. If this looks like it could be a kernel bug, I can set up a more recent kernel. Both systems have Intel 82801H USB controllers.

The device does some weird startup magic in which the software repeatedly resets the device and clears some bulk endpoints. Regardless of whether I perform this dance, the error occurs later on.

I'm hoping someone has seen this before and can give me some pointers. Otherwise, I imagine my next step would be to find workarounds (I might have one, but it creates other problems) or to instrument the Linux USB subsystem some more.

Data:

-Full Wireshark and Total Phase logs can be found here: http://ift.tt/1EPC2LT

-Full speed test code: bp1410_rst_12.py

-Total phase screenshot showing the request made it: failed_reply.png (from 12_04_mein_startup_cold.tdc)

-Wireshark screenshot (Windows reference): ws_ref.png

-Wireshark screenshot (my Linux libusb): ws_mine.png

"Error uncleared PCH FIFO underrun on transcoder A" during CentOS 7 boot on a Toshiba

I have a Toshiba Satellite laptop (model C50, model number 1001C). When I start it, it gives:

1.643361 [drm:cpt_serr_int_handler] *ERROR* uncleared PCH FIFO underrun on transcoder A
1.643363 [drm:cpt_serr_int_handler] *ERROR* pch transcoder A fifo underrun

This laptop holds an important installation and source code. I have searched, but I did not find a correct solution.

Control the volume of Volumio

I have set up a Raspberry Pi with the Volumio player. Now I want to control the volume with a rotary encoder, and also pause or play the current song.

#!/usr/bin/env python
#
# Raspberry Pi Rotary Test Encoder Class
#
# Author : Bob Rathbone
# Site   : http://ift.tt/1sHHWM2
#
# This class uses a standard rotary encoder with push switch
#

import sys
import time
from rotary_class import RotaryEncoder
import subprocess

# Define GPIO inputs
PIN_A = 21  
PIN_B = 16  
BUTTON = 4

# This is the event callback routine to handle events
def switch_event(event):
    if event == RotaryEncoder.CLOCKWISE:
        print "Volume up"
        subprocess.call(['mpc', 'volume', '+1'])

    elif event == RotaryEncoder.ANTICLOCKWISE:
        print "Volume down"
        subprocess.call(['mpc', 'volume', '-1' ])

    elif event == RotaryEncoder.BUTTONDOWN:
        print "Pause/Play"      

#   elif event == RotaryEncoder.BUTTONUP:
#       print "Button up"
    return

# Define the switch
rswitch = RotaryEncoder(PIN_A,PIN_B,BUTTON,switch_event)

while True:
    time.sleep(0.5)

This is the code I already have. But when I start it and try to set the volume +1, I just get an error.

This one:

Traceback (most recent call last):
  File "/home/pi/radio/rotary_class.py", line 87, in switch_event
    self.callback(event)
  File "./test_rotary_class.py", line 35, in switch_event
    subprocess.call(['mpc', 'volume', '-1' ])
  File "/usr/lib/python2.7/subprocess.py", line 493, in call
    return Popen(*popenargs, **kwargs).wait()
  File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1259, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory

It would be great if someone could help me and tell me how to do the pause/play :)
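A note on the traceback above: OSError: [Errno 2] usually means the mpc binary itself was not found on PATH, not that the volume arguments were wrong, so installing mpc (or calling it by its full path) is the first thing to check. For pause/play, mpc has a toggle command. A sketch of the idea (the /usr/bin/mpc path is an assumption; verify it with `which mpc`):

```python
import subprocess

MPC = "/usr/bin/mpc"   # assumed location; verify with `which mpc`

def mpc_command(action, *args):
    # Build an mpc invocation as an argv list for subprocess.call().
    return [MPC, action] + list(args)

def on_button_down():
    # 'mpc toggle' switches between pause and play in one command.
    subprocess.call(mpc_command("toggle"))
```

Calling on_button_down() from the RotaryEncoder.BUTTONDOWN branch would then cover the pause/play part of the question.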

FDT and ATAGS support not compiled in - hanging ### ERROR ### Please RESET the board ###

I'm following the tutorial from the website below to install Linux on the SoCkit board by Terasic:

http://ift.tt/1EPeG99.

This is my first time building Linux, so I am still learning. I was able to complete all the steps shown in the tutorial, but when I try to boot it gives me the error "Did not find a cmdline Flattened Device Tree / Could not find a valid device tree". Now, I know the .dtb file is on the SD card, and I can load it using U-Boot's "fatload" command. After I load the .dtb file and run the 'bootm' command, I get the error "FDT and ATAGS support not compiled in - hanging ### ERROR ### Please RESET the board ###".

I don't know where or how to enable this support. Could someone please help me with this?

Thank you, Karthik
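For context, U-Boot prints "FDT and ATAGS support not compiled in" from bootm when it was built without either flattened-device-tree or ATAG support. As far as I know, for U-Boot of that era this is enabled with config macros in the board header before rebuilding; a hedged sketch (the header file name depends on your board, so treat this as an assumption to verify against your U-Boot tree):

```c
/* include/configs/<your_board>.h -- rebuild U-Boot after adding these.
 * CONFIG_OF_LIBFDT lets bootm pass a flattened device tree to the kernel;
 * the ATAG macros cover the legacy boot interface. */
#define CONFIG_OF_LIBFDT
#define CONFIG_CMDLINE_TAG
#define CONFIG_SETUP_MEMORY_TAGS
```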

Tuesday, May 5, 2015

How to remotely execute a Windows program from Linux?

I have tried accessing a Windows machine's shell from Linux using the approach at http://ift.tt/1gpKPJ0, and it worked correctly. While using this I am able to ls or dir a Windows directory, but I can't execute the Python executable. It shows the error below.

$ C:\\Python27\\python.exe
-bash: C:\Python27\python.exe: command not found

How can I solve this problem? Should I skip this part and instead focus on client-server socket programming for this task? Please let me know.

How to highlight matching div or other paired HTML tags in Gedit 3?

Gedit is a very useful and fast editor, but I need it to highlight matching HTML tag pairs the way Sublime, Bluefish, Atom, etc. do. I installed the zen plugin, but it doesn't help. How can I fix this?

Django gunicorn nginx: unidentified POST requests to an unknown path

This is the part from gunicorn log file

[2015-05-06 11:48:17 +0000] [6197] [DEBUG] POST /composants/personnes/ajout_exterieur.cfc
[2015-05-06 08:18:18 +0000] [6173] [DEBUG] 9 workers
[2015-05-06 08:18:19 +0000] [6173] [DEBUG] 9 workers
[2015-05-06 11:48:20 +0000] [6188] [DEBUG] POST /composants/personnes/ajout_exterieur.cfc
[2015-05-06 08:18:20 +0000] [6173] [DEBUG] 9 workers
[2015-05-06 08:18:21 +0000] [6173] [DEBUG] 9 workers
[2015-05-06 08:18:22 +0000] [6173] [DEBUG] 9 workers
[2015-05-06 11:48:22 +0000] [6197] [DEBUG] POST /composants/personnes/ajout_exterieur.cfc
[2015-05-06 08:18:23 +0000] [6173] [DEBUG] 9 workers
[2015-05-06 08:18:24 +0000] [6173] [DEBUG] 9 workers
[2015-05-06 11:48:25 +0000] [6197] [DEBUG] POST /composants/personnes/ajout_exterieur.cfc
[2015-05-06 08:18:25 +0000] [6173] [DEBUG] 9 workers
[2015-05-06 08:18:26 +0000] [6173] [DEBUG] 9 workers
[2015-05-06 08:18:27 +0000] [6173] [DEBUG] 9 workers
[2015-05-06 11:48:28 +0000] [6194] [DEBUG] POST /composants/personnes/ajout_exterieur.cfc
[2015-05-06 08:18:28 +0000] [6173] [DEBUG] 9 workers
[2015-05-06 08:18:29 +0000] [6173] [DEBUG] 9 workers
[2015-05-06 11:48:30 +0000] [6182] [DEBUG] POST /composants/personnes/ajout_exterieur.cfc
[2015-05-06 08:18:30 +0000] [6173] [DEBUG] 9 workers
[2015-05-06 08:18:31 +0000] [6173] [DEBUG] 9 workers
[2015-05-06 08:18:32 +0000] [6173] [DEBUG] 9 workers
[2015-05-06 11:48:33 +0000] [6182] [DEBUG] POST /composants/personnes/ajout_exterieur.cfc
[2015-05-06 08:18:33 +0000] [6173] [DEBUG] 9 workers
[2015-05-06 08:18:34 +0000] [6173] [DEBUG] 9 workers
[2015-05-06 08:18:35 +0000] [6173] [DEBUG] 9 workers
[2015-05-06 11:48:35 +0000] [6188] [DEBUG] POST /composants/personnes/ajout_exterieur.cfc
[2015-05-06 08:18:36 +0000] [6173] [DEBUG] 9 workers
[2015-05-06 08:18:37 +0000] [6173] [DEBUG] 9 workers

It sends POSTs to /composants/personnes/ajout_exterieur.cfc, which I know nothing about. A little googling yielded this site http://ift.tt/1GOhMsr but it is down. I suspect some sort of DDoS attack is involved and that my server is being used for it. There is no suspicious code in my project, and it runs inside a virtualenv in which I have installed the following Python modules:

Django==1.8
Pillow==2.8.1
amqp==1.4.6
anyjson==0.3.3
argparse==1.2.1
billiard==3.3.0.19
celery==3.1.17
django-celery==3.1.16
django-crispy-forms==1.4.0
gunicorn==19.3.0
jsonfield==1.0.3
kombu==3.0.24
psycopg2==2.6
pycrypto==2.6.1
pytz==2015.2
redis==2.10.3
requests==2.6.2
wsgiref==0.1.2

I don't know where to look. Is this a security threat? Thanks!

BlueZ hci_* API to make the host discoverable

Environment:

  • Linux
  • BlueZ Bluetooth stack
  • C API
  • No usage of the dbus interface

I must say that the HCI BlueZ C API (hci_lib.h) is poorly documented. That said, is there a C hci_* API for controlling the host's discoverable state, something similar to "hci_write_simple_pairing_mode" but enabling control of discoverability?

Can I restrict a log4j.properties-generated log file to be accessed only with user credentials?

I am using Tomcat to host my web applications. I configured log4j.properties for each application and am successfully generating a log text file for every application. I am even able to view the generated log file in the browser, alongside the application's other files.

But the issue is that I don't want everyone to see that log file; I want to restrict it to a specific user or group of users. In other words, my requirement is similar to a web page requiring authentication before it can be accessed. So, is there any property in log4j.properties for such a requirement, or can I configure something in the OS to keep the log file from being viewed by everyone?

Thanks in advance.

Bash script to recursively delete folders containing a file starting with a particular string

I need to create a Linux script that will recursively scan a folder and its subfolders, check each folder for a particular file (always called create.info), and check whether that create.info file starts with a particular string. If it does, the folder containing the file, with all its contents, should be deleted. Ideally all of this would use tools already available in SLES.
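As a sketch of the logic (SLES's stock find/grep/rm could do the same job; the purge() name below is made up for illustration):

```python
import os
import shutil

def purge(root, prefix, marker="create.info"):
    """Delete every directory under root whose create.info starts with prefix.
    Walk bottom-up so removing a directory never disturbs directories that
    are still to be visited."""
    removed = []
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        info = os.path.join(dirpath, marker)
        if os.path.isfile(info):
            with open(info) as f:
                if f.read(len(prefix)) == prefix:
                    shutil.rmtree(dirpath)
                    removed.append(dirpath)
    return removed
```

A pure-shell equivalent might start from `find . -name create.info`, grep each hit for the leading string, and remove the matching file's directory; whichever route you take, dry-run with echo before adding any rm -rf.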

How to send v2 traps in net-snmp using C

I have the following configuration:

  1. Trap oid = .1.3.6.1.4.1.78945.1.1.1.1.1
  2. Trap variable oid = .1.3.6.1.4.1.78945.1.1.2.1.0, variable type = string
  3. Another Trap variable oid = .1.3.6.1.4.1.78945.1.1.2.4.0, variable type = integer.
  4. Trap listener ip and port = 192.168.4.10:1234

How can I send traps using C or C++ and the net-snmp library on Linux? I need sample code; all the samples on the net-snmp site failed for me.

My sample code:

#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>
#include <string.h>

oid objid_id[]   = { 1, 3, 6, 1, 4, 1, 78945, 1, 1, 2, 4, 0 };
oid objid_name[] = { 1, 3, 6, 1, 4, 1, 78945, 1, 1, 2, 1, 0 };
oid trap_oid[]   = { 1, 3, 6, 1, 4, 1, 78945, 1, 1, 1, 1, 1 };

int main(void)
{
    netsnmp_session session, *ss;
    netsnmp_pdu    *pdu;

    char comm[] = "public";
    snmp_sess_init(&session);
    session.version = SNMP_VERSION_2c;
    session.community = (u_char *) comm;   /* community is a u_char pointer */
    session.community_len = strlen(comm);
    session.peername = "192.168.4.10:1234";
    ss = snmp_open(&session);
    if (!ss) {
      snmp_sess_perror("ack", &session);
      exit(1);
    }

    pdu = snmp_pdu_create(SNMP_MSG_TRAP2);
    /* Note: community/enterprise/trap_type are SNMPv1 trap fields; an
       SNMPv2 trap instead expects sysUpTime.0 and snmpTrapOID.0 as its
       first two varbinds. */
    pdu->community = (u_char *) comm;
    pdu->community_len = strlen(comm);
    pdu->enterprise = trap_oid;
    pdu->enterprise_length = sizeof(trap_oid) / sizeof(oid);
    pdu->trap_type = SNMP_TRAP_ENTERPRISESPECIFIC;
    snmp_add_var(pdu, objid_name, sizeof(objid_name) / sizeof(oid), 's', "Test Name");
    snmp_add_var(pdu, objid_id, sizeof(objid_id) / sizeof(oid), 'i', "5468");

    send_trap_to_sess(ss, pdu);
    snmp_close(ss);
    return 0;
}

The heartbeat notification example on the net-snmp site confused me: where do I give the listener details?

Thank you in advance.

Run Linux/MQSC commands from mq client

OK, I want to check whether I can run OS or MQSC commands on an MQ server remotely. As far as I know, this can be done with SYSTEM.ADMIN.SVRCONN. To do that, I add a remote queue manager to my WebSphere MQ client. I enter the queue manager name on the server with the proper IP, but when I use SYSTEM.ADMIN.SVRCONN as the channel name, I get a "Channel name not recognized (AMQ4871)" error.

Also, if I have a channel named MY.CHANNEL.NAME that is a server-connection channel with mqm as its MCAUSER, can I run commands (MQSC or OS) on the server through this channel?

Thanks.

Edit1


I am using WebSphere MQ v.7.0

By "I add a remote Queue Manager to my WebSphere MQ client" I meant I added a remote queue manager to MQ Explorer.

How can I uninstall SpagoBI 4.2 and install SpagoBI 5.1?

I was working with SpagoBI 4.2 and decided to move to SpagoBI 5.1 without knowing the process. Now I have both versions 4.2 and 5.1 of SpagoBI installed, and neither of them works.

Please assist me.

Kind regards.

RT5572STA interface doesn't support scanning

I am using an RT5572 USB Wi-Fi adapter. After building it I was able to load the driver, but when I tried to scan the network from the command line I got the error "interface doesn't support scanning". I downloaded the source from http://ift.tt/1qaV76X

and used the following commands to build:

=======================================================================

Build Instructions:

1. $ tar -xvzf DPB_RT2870_Linux_STA_x.x.x.x.tgz
   Go to the "./DPB_RT2870_Linux_STA_x.x.x.x" directory.

2. In the Makefile, set "MODE = STA", choose the Linux target by setting "TARGET = LINUX", and modify the kernel source include path LINUX_SRC to meet your need.

3. In os/linux/config.mk, define the GCC and LD of the target machine and modify the compiler flags CFLAGS to meet your need. To build for control by NetworkManager or the wpa_supplicant wext functions, set 'HAS_WPA_SUPPLICANT=y' and 'HAS_NATIVE_WPA_SUPPLICANT_SUPPORT=y', then:
   #> cd wpa_supplicant-x.x
   #> ./wpa_supplicant -Dwext -ira0 -c wpa_supplicant.conf -d

4. $ make   # compile the driver source code
   To fix "error: too few arguments to function 'iwe_stream_add_event'":
   $ patch -i os/linux/sta_ioctl.c.patch os/linux/sta_ioctl.c

5. $ cp RT2870STA.dat /etc/Wireless/RT2870STA/RT2870STA.dat

6. Load the driver from the "os/linux/" directory [kernel 2.6]:
   $ /sbin/insmod rt2870sta.ko
   $ /sbin/ifconfig ra0 inet YOUR_IP up

I am not able to understand the meaning of "#> cd wpa_supplicant-x.x" and "#> ./wpa_supplicant -Dwext -ira0 -c wpa_supplicant.conf -d" in step 3.

Shell script to delete a line next to empty lines

I'm able to delete empty lines from a file using grep or sed, but I can't solve a scenario where I also have to delete the valid line that follows an empty line. Here is an example:

Source:

1_1
1_2
1_3
1
2_1
2_2
2_3
2_4
2_5
2

3
4_1
4_2
4
5_1
5_2
5_3
5_4
5



6
7_1
7
8_1
8_2
8

Output:

1_1
1_2
1_3
1
2_1
2_2
2_3
2_4
2_5
2
4_1
4_2
4
5_1
5_2
5_3
5_4
5
7_1
7
8_1
8_2
8

How can I delete the valid line that follows the empty lines?
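For the sample above, GNU sed appears to handle it in one expression: `sed '/^$/{N;d}'` appends the line after each empty line and deletes both (a lone blank at end of file needs extra care). The same logic, sketched in Python for clarity: every blank line is dropped, and so is the first non-blank line after a run of blanks.

```python
def strip_blanks_and_followers(lines):
    """Drop every blank line and the first non-blank line that follows
    a run of blank lines."""
    out, after_blank = [], False
    for line in lines:
        if line.strip() == "":
            after_blank = True      # blank line: drop it, remember it
        elif after_blank:
            after_blank = False     # first line after blanks: drop it too
        else:
            out.append(line)
    return out
```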

Captive portal automatic pop-up without a WAN interface

I'm working on a captive portal project to distribute an application on my local network. One of the nice features of captive portals is that iOS and Android can detect them and automatically launch a browser to show my landing page.

The parameters of the project are:

  1. I'm using a Python script and iptables as my captive portal.
  2. After a user connects, it should launch the browser (Android/iOS probe some URLs like google.com or apple.com, and if the probe fails they know there is a captive portal and open the browser).
  3. All HTTP requests are automatically redirected to my landing page, served by Apache.

What we have now:

Everything works as expected when we connect the WAN, start the wireless access point, and start the captive portal service; there is no problem!

What is the problem?

We do not want to share internet access, only deliver content over wireless. Currently we have to connect the WAN interface to an ADSL modem; if we don't attach the WAN interface, Android/iOS do not automatically launch the browser, and users have to browse to some page by hand to be redirected to the landing page.

My suggestion

I was thinking about how to solve this. My theory is that if we configure a virtual WAN interface and answer the probes for google.com and apple.com through that virtual WAN, rejecting them with iptables, the browser will open automatically. But it's just a theory, and I don't know how to do it! I found some packages like http://ift.tt/1k7E2qj, but that is an appliance; I need to implement this on a Linux box alongside the captive portal.

I'd appreciate any ideas for resolving this :)

UIO Drivers - switching to kernel Interrupt

I was reading about userspace I/O (UIO) drivers, which eliminate the drawback of the kernel crashing due to inappropriate behaviour in driver code.

If the kernel has scheduled process1 and it is already running on the CPU, and process2 from another device requests service while process1 is active, then process2 gains priority to run (through the generation of an interrupt).

Likewise, with userspace drivers, the interrupt handler resides in user mode while the detection and dispatch of the interrupt happen in kernel mode. Once an interrupt is handled at device insertion, can there be further interrupts from the same device (other than insertion)? I would guess that reads/writes to the device can raise the interrupt again. But by my reading of the code, the read() syscall is made from the user-mode driver [the CIF driver in this case], and the user-mode interrupt handler is put on a wait queue by uio_read() [in uio.c] until a kernel interrupt occurs. As far as I can see, the kernel interrupt handler uio_interrupt() is only registered during device initialisation, through the uio_register_device() call.

How does a kernel interrupt occur again for the same CIF device, so that it wakes up the sleeping interrupt handler to process the interrupt?
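To connect this to the question: uio_register_device() registers uio_interrupt() as the IRQ handler, and that handler runs on every hardware interrupt the device raises (not just at insertion), bumping an event counter and waking any reader. So each blocking read() on /dev/uioN returns once per interrupt. The userspace side looks roughly like this; the device node name is an assumption, and the write-1-to-re-enable step applies to uio_pdrv_genirq-style drivers:

```python
import os
import struct

def parse_event_count(buf):
    """The kernel returns a 4-byte native-endian interrupt count from
    every successful read() on /dev/uioN."""
    return struct.unpack("i", buf)[0]

def wait_for_irq(fd):
    # Blocks until the next interrupt for this UIO device.
    count = parse_event_count(os.read(fd, 4))
    # For uio_pdrv_genirq-style drivers, writing 1 re-enables the IRQ.
    os.write(fd, struct.pack("i", 1))
    return count

# usage sketch (device node is an assumption):
# fd = os.open("/dev/uio0", os.O_RDWR)
# print(wait_for_irq(fd))
```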

librtmp free(): invalid pointer

I am trying to handle packets with librtmp but get an error saying

"free(): invalid pointer"

The code:

#include <stdio.h>
#include <stdlib.h>
#include <librtmp/rtmp.h>
#include <librtmp/log.h>

int main(){
    RTMP *r;
    RTMPPacket packet = { 0 };  /* zero-initialise: RTMPPacket_Free() on an
                                   uninitialised struct frees a garbage
                                   body pointer ("free(): invalid pointer") */

    char uri[] = "rtmp://167.114.171.21:1936/tinyconf app=tinyconf timeout=180000 live=1 conn=S:heckeur swfurl=http://tinychat.com/embed/Tinychat-11.1-1.0.0.0602.swf";

    RTMP_LogLevel loglvl=RTMP_LOGDEBUG2;
    RTMP_LogSetLevel(loglvl);

    r = RTMP_Alloc();
    RTMP_Init(r);
    RTMP_SetupURL(r, (char*)uri);
    RTMP_Connect(r, NULL);

    while (RTMP_IsConnected(r)) {
        if (!RTMP_ReadPacket(r, &packet))
            break;                        /* read failed: stop the loop */
        if (!RTMPPacket_IsReady(&packet))
            continue;                     /* wait for the remaining chunks */
        RTMP_ClientPacket(r, &packet);
        RTMPPacket_Free(&packet);
    }

    RTMP_Close(r);
    RTMP_Free(r);

    return 1;
}

Here's a link to the log/backtrace. (As it's pretty long)

I'm unsure why this is happening. Is this a problem with my code or with librtmp itself?

High availability computing: How to deal with a non-returning system call, without risking false positives?

I have a process running on a Linux computer as part of a high-availability system. The process has a main thread that receives requests from the other computers on the network and responds to them. There is also a heartbeat thread that periodically sends out multicast heartbeat packets, to let the other processes on the network know that this process is still alive and available; if they don't hear any heartbeat packets from it for a while, one of them will assume this process has died and will take over its duties, so that the system as a whole can continue to work.

This all works pretty well, but the other day the entire system failed, and when I investigated why I found the following:

  1. Due to (what is apparently) a bug in the box's Linux kernel, there was a kernel "oops" induced by a system call that this process's main thread made.
  2. Because of the kernel "oops", the system call never returned, leaving the process's main thread permanently hung.
  3. The heartbeat thread, OTOH, continued to operate correctly, which meant that the other nodes on the network never realized that this node had failed, and none of them stepped in to take over its duties... and so the requested tasks were not performed and the system's operation effectively halted.

My question is, is there an elegant solution that can handle this sort of failure? (Obviously one thing to do is fix the Linux kernel so it doesn't "oops", but given the complexity of the Linux kernel, it would be nice if my software could also handle other future kernel bugs more gracefully.)

One solution I don't like would be to put the heartbeat generator into the main thread, rather than running it as a separate thread, or in some other way tie it to the main thread so that if the main thread gets hung up indefinitely, heartbeats won't get sent. The reason I don't like this solution is because the main thread is not a real-time thread, and so doing this would introduce the possibility of occasional false-positives where a slow-to-complete operation was mistaken for a node failure. I'd like to avoid false positives if I can.

Ideally there would be some way to ensure that a failed syscall either returns an error code, or if that's not possible, crashes my process; either of those would halt the generation of heartbeat packets and allow a failover to proceed. Is there any way to do that, or does an unreliable kernel doom my user process to unreliability as well?

Installed boost-system is not available

I have a problem with my configure script. If I run ./configure, I get:

checking whether the Boost::Filesystem library is available... yes
checking for exit in -lboost_filesystem... yes
checking whether the Boost::System library is available... no
checking whether the Boost::Program_Options library is available... yes
checking for exit in -lboost_program_options... yes
checking whether the Boost::Unit_Test_Framework library is available... no
checking whether the Boost::Regex library is available... yes
checking for exit in -lboost_regex... yes
checking whether the Boost::ASIO library is available... yes

But I definitely have libboost-system1.55-dev installed, because Aptitude says so.

If I run make, I get:

/usr/bin/ld: network.o: undefined reference to symbol '_ZN5boost6system15system_categoryEv'
//usr/lib/x86_64-linux-gnu/libboost_system.so.1.55.0: error adding symbols: DSO missing from command line

I am currently running under Debian Jessie.

Here are some parts of my configure.ac:

AX_BOOST_BASE([1.55],, [AC_MSG_ERROR([boost 1.55 is needed, but it was not found in your system])])
AX_BOOST_FILESYSTEM
AX_BOOST_SYSTEM
AX_BOOST_PROGRAM_OPTIONS
AX_BOOST_UNIT_TEST_FRAMEWORK
AX_BOOST_REGEX
AX_BOOST_ASIO
AX_BOOST_THREAD

BOOST_LDLIBS="$BOOST_LDFLAGS $BOOST_FILESYSTEM_LIB $BOOST_THREAD_LIBRARY $BOOST_PROGRAM_OPTIONS_LIB $BOOST_REGEX_LIB $BOOST_SYSTEM"
AC_SUBST(BOOST_LDLIBS)

Interestingly, everything compiles if I change the BOOST_LDLIBS line in configure.ac to:

BOOST_LDLIBS="$BOOST_LDFLAGS $BOOST_FILESYSTEM_LIB $BOOST_THREAD_LIBRARY $BOOST_PROGRAM_OPTIONS_LIB $BOOST_REGEX_LIB $BOOST_SYSTEM -lboost_system"

But I don't want this, because it is a very dirty hack.
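One detail that may explain why the hack is needed: as far as I know, the autoconf-archive AX_BOOST_SYSTEM macro exports the library name in BOOST_SYSTEM_LIB, not BOOST_SYSTEM, so the original line expands that term to an empty string. If that is the macro set in use here, the clean version would be:

```
BOOST_LDLIBS="$BOOST_LDFLAGS $BOOST_FILESYSTEM_LIB $BOOST_THREAD_LIBRARY $BOOST_PROGRAM_OPTIONS_LIB $BOOST_REGEX_LIB $BOOST_SYSTEM_LIB"
```

The "Boost::System library is available... no" result also suggests the macro's own link test failed, which config.log should confirm; until that check passes, BOOST_SYSTEM_LIB may still be empty.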

Linux permissions - user vs group permission

I know that the permissions listed are for the user (owner), then the group, and the third set is for other users.

My example:

-rwxr--r-- tooth face file1.txt
-rwxr--r-- eye face file2.txt
-rwxr--r-- leg face file3.txt

The group memberships are:

tooth : face head 
eye : face head 
leg : body

Now, my doubt is: will "leg" have rwx permissions on file3.txt, given that it is not a member of the group "face"?
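For reference, Linux evaluates exactly one permission class per access: owner first, then group, then other, and the first matching class wins. In the listing above "leg" is the owner of file3.txt, so the owner bits (rwx) apply and the group bits are never consulted. A toy sketch of that rule:

```python
def effective_bits(perms, is_owner, in_group):
    """perms is a 9-char string like 'rwxr--r--' (owner, group, other
    triplets). Only the first matching class applies; later classes
    are ignored."""
    if is_owner:
        return perms[0:3]
    if in_group:
        return perms[3:6]
    return perms[6:9]
```

So effective_bits("rwxr--r--", is_owner=True, in_group=False) gives "rwx" for leg on file3.txt, membership in "face" notwithstanding.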

Nested Quotes in Perl System()

I'm trying to modify a perl script. Here is the part I am trying to modify:

Original:

        system ("tblastn -db $BLASTDB -query $TMP/prot$$.fa \\
             -word_size 6 -max_target_seqs 5 -seg yes -num_threads $THREADS -lcase_masking \\
             -outfmt \"7 sseqid sstart send sframe bitscore qseqid\"\\
             > $TMP/blast$$") && die "Can't run tblastn\n";

I am trying to replace the system("tblastn ...") call with the following:

system ("cat $TMP/prot$$.fa | parallel --block 50k --recstart '>' --pipe \\ tblastn -db $BLASTDB -query - -word_size 6 -outfmt \'7 sseqid sstart send sframe bitscore qseqid\' -max_target_seqs 5 -seg yes -lcase_masking > $TMP/blast$$") && die "Can't run tblastn\n";

This replaces the plain tblastn invocation with GNU parallel, which pipes chunks of the input to tblastn. Running the above command in bash (replacing the temp inputs with actual files) works perfectly, but when the Perl script executes it, the error log (for tblastn) says it terminated too soon, after "sseqid". The same error happens if you run the command without the escape characters in bash.

Because of this, I'm assuming the error is that the single quotes around "7 sseqid sstart..." are not being parsed properly. I'm not sure how to nest quotes properly in Perl; I thought I was doing it right, since it works via bash but not via the Perl script. I looked at a lot of Perl documentation, and everything says the escape character \ should work with single or double quotes, yet in my case it doesn't.

Can someone provide input on why the quotes are not being processed?

How to resolve install: omitting directory error?

I am trying to cross-compile a package in OpenWrt, but I am getting the error "install: omitting directory". The package is enabled in menuconfig.

Controlling an external program with Java

My question is more regarding best practices rather than how to do it.

I am looking to create a wrapper utility for LVM on Linux to automate the creation and management of snapshots. The idea originally came from wanting to back up my Minecraft server without making a full copy of the file structure every single time.

The most direct way to execute the commands right from Java would be to use ProcessBuilder. My concern, however, is the reliability of the execution, especially when dealing with filesystem-management commands. I want to remove any possibility of getting stuck in a limbo state where Java says it's done but LVM has encountered an error, or vice versa.

Is using the ProcessBuilder class really the "best" way to go about an idea like this?

I would like your two cents on this, and on how I could create a reliable way to drive something like LVM.

rpmbuild: common ownership of directories

Suppose the packages I'm building for myprog1 and myprog2 install into /usr/lib/mysystem/myprog1/ and /usr/lib/mysystem/myprog2/.

According to some distros' documentation, such as in the case of openSUSE, both packages must own the shared directory. But how is that accomplished in the .spec files? Is the following correct?

%files
/usr/lib/mysystem

or do I need to do

%files
%dir /usr/lib/mysystem
/usr/lib/mysystem/myprog<1|2>
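As far as I know, the second form is the usual pattern: each package explicitly claims the shared directory with %dir plus its own subtree, and RPM is fine with several packages owning the same directory as long as ownership and permissions agree. The first form would instead make each package own everything under /usr/lib/mysystem, including the other package's files. So myprog1's spec would carry something like:

```
%files
%dir /usr/lib/mysystem
/usr/lib/mysystem/myprog1
```

with myprog2's spec listing its own subtree the same way.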

rtl8192eu driver does not work

I am writing this because my computer does not recognize the driver; at the end of the installation I get this message:

make[2]: *** [/home/kevin/Descargas/install_folder/driver/rtl8192EU_linux_v4.2.2_7585.20130524/os_dep/linux/usb_intf.o] Error 1

Makefile:1383: recipe for target 'module/home/kevin/Descargas/install_folder/driver/rtl8192EU_linux_v4.2.2_7585.20130524' failed
make[1]: *** [module/home/kevin/Descargas/install_folder/driver/rtl8192EU_linux_v4.2.2_7585.20130524] Error 2
make[1]: Leaving directory '/usr/src/linux-headers-4.1.0-040100rc1-generic'
Makefile:1043: recipe for target 'modules' failed
make: *** [modules] Error 2
Compile make driver error: 2

#

Please check mistake Mesg

This is with Linux kernel 4.1.0-040100rc1. I have another kernel version, 3.13.039, with which this does not give me problems. What can I do? PS: I use Ubuntu 15.04, kernel 4.0.1.

Running FFMPEG and FFserver from Glassfish Java Servlet

I'm planning to design a web interface to control my FFserver remotely. The question is, is it possible to start/stop the FFserver running on my Linux server using a Servlet running on the same machine?

I'm using Glassfish server and Java EE 7 for my web application. Currently I managed to get my web app to obtain http streams (which are started manually in terminal with predefined config file) and play them on the web. However, now I want to find a way to stop the streams and start the streams on demand.

Is it possible for me to run a Bash script via the servlet? Or are there better solutions that allow servlets to run Linux commands?

Many thanks in advance!

Redirect stdin/stdout of any program in Linux C?

How would one go about making a program (program A) that takes as an argument the location of ANOTHER program (program B) and runs it in the background?

Basically, program A would start in the background, then start program B and redirect program B's stdout to a file (flushed continuously, like a live feed).

After program B has started, its stdin could be accessed by typing something like "A -in 'quit'" into the terminal.
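The shape of this (in C it would be fork(), dup2() of the child's stdout onto an opened file descriptor, then exec) can be sketched with Python's subprocess module; the run_b_logged() name is made up for illustration:

```python
import subprocess
import sys
import tempfile

def run_b_logged(argv, logpath):
    """Start 'program B' with its stdout redirected live to a file and
    its stdin left open as a pipe, like the 'program A' described above."""
    log = open(logpath, "wb", buffering=0)  # unbuffered: live feed
    return subprocess.Popen(argv, stdin=subprocess.PIPE, stdout=log)

# demo: a stand-in "program B" that copies stdin to stdout
logfile = tempfile.NamedTemporaryFile(delete=False)
proc = run_b_logged(
    [sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read())"],
    logfile.name,
)
proc.communicate(b"quit\n")  # feed B's stdin, then wait for it to exit
```

The later "A -in 'quit'" part would then amount to program A forwarding text it receives onto proc.stdin.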

Automating shutting down several Raspberry Pis with "screen" command and "halt" command

I have configured the serial-to-USB ports of 30+ Raspberry Pis and connected all the ports to a server, so I can log in to each Pi over its serial port from the server with the following commands:

sudo screen /dev/ttyUSB0 115200
sudo screen /dev/ttyUSB1 115200
....
sudo screen /dev/ttyUSB32 115200

To shut down one Pi manually, I can log in with one of the above commands. The terminal first shows a blank screen and displays the login prompt after I press Enter. I can then log in as root and use the "halt" command.

My question is how I can automate this manual process for the 30+ Pis with a script.

PS: I don't have SSH access to the Pis; otherwise I could use ssh-keygen on each Pi to enable passwordless SSH root login, and then use a script similar to the following to log in to each Pi remotely and halt it:

ssh root@pi0 halt
ssh root@pi2 halt
ssh root@pi3 halt
....
ssh root@pi32 halt

Any ideas and suggestions would be appreciated! Thanks in advance!
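One possible way to script this is to drive each serial console directly (for example with pyserial) instead of interactive screen sessions. The dialogue below is only a sketch: the prompt sequence, the root password, and the port count are assumptions to adapt to the actual consoles:

```python
def halt_over_serial(port, user=b"root", password=b"raspberry"):
    """Drive a login-then-halt dialogue on an open serial console.
    `port` is anything file-like with write()/readline(), e.g. a pyserial
    serial.Serial('/dev/ttyUSB0', 115200, timeout=10) object."""
    port.write(b"\n")              # wake the console so the login prompt appears
    port.readline()
    port.write(user + b"\n")       # answer 'login:'
    port.readline()
    port.write(password + b"\n")   # answer 'Password:'
    port.readline()
    port.write(b"halt\n")          # shut the Pi down

# hypothetical loop over all consoles (requires pyserial):
# import serial
# for n in range(33):
#     halt_over_serial(serial.Serial("/dev/ttyUSB%d" % n, 115200, timeout=10))
```

In practice the readline() calls would want real prompt matching (a tool like expect, or pexpect in Python, handles that more robustly).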

Bash: print 0 in the terminal each time an unrecognised argument is input

I have a bash program that extracts marks from a file that looks like this:

 Jack ex1=5 ex2=3 quiz1=9 quiz2=10 exam=50

I want the code to execute such that when I input into terminal:

./program -ex1 -ex2 -ex3

Jack does not have an ex3 in his data, so a 0 should be returned in its place:

Jack 5 3 0

How do I code my program to output 0 for each unrecognized argument?
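The lookup-with-default logic can be sketched like this (in bash itself, `${marks[$key]:-0}` on an associative array gives the same fallback); the marks_for() name is made up, and the wanted fields are the argv flags with the leading '-' stripped:

```python
def marks_for(record, wanted):
    """record: 'Jack ex1=5 ex2=3 ...'; wanted: requested field names.
    Fields missing from the record fall back to '0'."""
    name, *pairs = record.split()
    values = dict(pair.split("=", 1) for pair in pairs)
    return [name] + [values.get(field, "0") for field in wanted]
```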

Count words matching a regex pattern in PHP?

I'm matching the pattern 'lly' against '/usr/share/dict/words' on Linux, and I can display the matches in the browser. I want to count how many words match the pattern and display the total at the end of the output. This is my PHP script:

<?php
$count = 0;
$dfile = fopen("/usr/share/dict/words", "r");
// fgets() returns false at EOF, so test that directly instead of feof()
while (($mynextline = fgets($dfile)) !== false) {
    if (preg_match("/lly/", $mynextline)) {
        echo "$mynextline<br>";
        $count++;
    }
}
fclose($dfile);
echo "Total matches: $count<br>";
?>

YouCompleteMe can't autocomplete

I want to develop C/C++ programs, so I installed YouCompleteMe for Vim through Vundle, but it doesn't work properly: it shows only the words contained in the current file. I hope you can help! My steps were as follows:

  • download Vundle.vim
    git clone http://ift.tt/1nKgY4c ~/.vim/bundle/Vundle.vim
  • modify .vimrc

    set nocompatible
    filetype off
    set rtp+=~/.vim/bundle/Vundle.vim
    call vundle#begin()
    Plugin 'gmarik/Vundle.vim'
    Plugin 'Valloric/YouCompleteMe'
    call vundle#end()
    filetype plugin indent on

  • Launch vim and run:
`:PluginInstall`
  • download cmake and clang+llvm

    http://ift.tt/1xubqjk
    http://ift.tt/1u7Di4V

  • prepare clang and cmake

    Extract "clang+llvm-3.6.0-x86_64-linux-gnu-ubuntu-14.04.tar.xz" into ycm_temp
    Rename "clang+llvm-3.6.0-x86_64-linux-gnu" to "llvm_root_dir"
    Extract cmake-3.2.2-Linux-x86_64.tar.gz and Link bin/cmake to /usr/bin/cmake

  • make

    cd ~
    mkdir ycm_build
    cd ycm_build
    cmake -G "Unix Makefiles" -DPATH_TO_LLVM_ROOT=~/ycm_temp/llvm_root_dir . ~/.vim/bundle/YouCompleteMe/third_party/ycmd/cpp
    make ycm_support_libs

  • modify .vimrc

    let g:ycm_seed_identifiers_with_syntax=1
    let g:ycm_global_ycm_extra_conf = '/home/li/.vim/bundle/YouCompleteMe/.ycm_extra_conf.py'
    let g:ycm_confirm_extra_conf=0
    let g:ycm_collect_identifiers_from_tag_files = 1
    set completeopt=longest,menu

Now no errors or warnings are thrown, but it can't autocomplete C/C++ header files!

  • Note
    OS: Ubuntu 14.04
    Vim: 7.4
    Python: 2.7.6

Importing SQL files into MySQL

I have two SQL files, both with the same database name and the same table structure but different data. How can these two SQL files be imported into one database?

C++ OpenCV Motion Tracking Ping Pong Application GPU accelerated on Nvidia Tegra Board Linux

We are writing an OpenCV C++ application that motion detects a ping pong ball and tries to predict the motion of the ball as it moves in front of the camera to determine if it will go over the net or not.

Our functions utilize the GPU cores to do the image processing. Our problem is that we can't display the OpenCV matrix using imshow since it is painfully slow. Are there any alternatives or suggestions you would have for displaying an OpenCV matrix on Linux, specifically the Nvidia Jetson TK1? OpenCV for Tegra doesn't support OpenGL imshow unfortunately and we can't get the standard OpenCV to compile on the Tegra board with OpenGL.

Use Picocom to input data to a Pseudo Terminal Slave Stream

I have created a pseudo-terminal slave port (and master), and I am listening on the slave port in my program. Is it possible to use Picocom to enter data on this slave port stream?

I have attempted to connect using the following Picocom command: picocom -b 9600 /dev/SLAVE_PORT_NAME

I can connect successfully, but typing data into the terminal doesn't result in my process receiving any data. Is my problem with my picocom 'write' end or with my receive end?

My code to create the Pseudo Terminal:

int main()
{
    cout << "Hello world!" << endl;

    int fdm, fds;
    int res = openpty(&fdm, &fds, NULL, NULL, NULL);
    if (res < 0) {
        cout << "Failed to open\n";
        exit(1);
    }

    char sym_cmd[100];
    // ptsname(fdm) is the slave side; link it to a stable name
    sprintf(sym_cmd, "sudo ln -sf %s %s", ptsname(fdm), "/dev/gps0");
    system(sym_cmd); // Create symbolic link

    for (;;) {
        fd_set input;
        //struct timeval timeout;
        FD_ZERO(&input);
        FD_SET(fds, &input);
        //timeout.tv_sec = 10 // 10 secs
        //timeout.tv_usec = 0;

        int res;
        for (;;) {
            res = select(fds+1, &input, NULL, NULL, NULL);

            if (res == -1) {
                cerr << "Error: " << endl;
                break;
            }
            /*else if (res == 0) {
                printf("No messages during period");
            }*/
            else  {
                if (FD_ISSET(fds, &input)) {
                    char buf[100];
                    int chars_read = read(fds, buf, sizeof(buf) - 1);
                    if (chars_read > 0) {
                        buf[chars_read] = '\0'; // NUL-terminate before printing
                        cout << "Msg: " << buf << endl;
                    }
                }
            }
        } /* forever */
    }

    return 0;
}

Count occurrence of numbers in linux

I have a .txt file with 25,000 lines; each line contains a number from 1 to 20. I want to compute the total number of occurrences of each number in the file. Should I use grep or awk, and how? I'm also worried about confusing 1 and 11, which both contain a '1'. Thank you very much for helping!
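One possible sketch: awk compares $1 as a whole token, so 1 and 11 can never be confused with each other (the same is true of `sort file | uniq -c`, which counts whole lines).

```shell
# count_numbers: print "number count" for every distinct number in a file,
# sorted numerically. awk matches whole fields, never substrings.
count_numbers() {
    awk '{count[$1]++} END {for (n in count) print n, count[n]}' "$1" | sort -n
}

# e.g.: count_numbers numbers.txt
```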

Linux Plesk scheduled task of php script

I am trying to execute a php script that is on my domain. I am trying to create a scheduled task to run this. I have the path of the .php file as

/httpdocs/api/cron.php

When I enter only this as the command, I receive a message (by email) saying

/httpdocs/api/cron.php: line 1: ?php: No such file or directory

/httpdocs/api/cron.php: line 2: /bin: is a directory

/httpdocs/api/cron.php: line 3: cron.php: command not found

And so on. From what I have read, I first need to enter the path to the PHP binary on the server, but I don't know that path. From the PHP info I saw by opening a .php file, I found

_SERVER["PATH"]
/sbin:/usr/sbin:/bin:/usr/bin

I then tried the command

/usr/bin/php -f /httpdocs/api/cron.php

But by email I got an error saying

-: /usr/bin/php: No such file or directory

I have also tried many variations, such as

php /httpdocs/api/cron.php

but all return similar errors.

Could someone please advise the correct command to execute the cron.php file?
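Before writing the scheduled-task command, it helps to confirm where the PHP CLI binary actually lives; the locations probed below are guesses that vary by distribution and Plesk version, not certainties.

```shell
# find_php: print the first PHP interpreter found, or a marker if none is.
find_php() {
    p=$(command -v php 2>/dev/null)
    [ -z "$p" ] && p=$(ls /usr/bin/php* /usr/local/bin/php* 2>/dev/null | head -n 1)
    [ -z "$p" ] && p="php-not-found"
    printf '%s\n' "$p"
}
find_php
```

With the real binary path, the task command would look like `/full/path/to/php -f /full/path/to/httpdocs/api/cron.php`. Note that on Plesk the document root typically lives under /var/www/vhosts/&lt;domain&gt;/httpdocs (an assumption worth verifying for your install), which may explain why the bare /httpdocs/api/cron.php path fails.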

Why does the PCIe TLP header have "Last DW BE" and "First DW BE"?

I've run into a problem related to PCIe. I use a driver to write 0x12345678 to BAR0+offset, and use Xilinx ChipScope to see the waveform. On our Intel Rangeley board, the TLP payload is split into two DWs, that is 00_00_00_78 56_34_12_00, while on a Dell PC we see only one DW in the payload. I'm sure both cases conform to the PCIe specification. But I really wonder why the specification has this design, that is, the "Last DW BE" and "First DW BE" fields in the second DW of the TLP header. I hope someone can help; thanks in advance.

CPU usage of file system operations not shown in top

I run the following command (which does ls for 10,000 times) in a Ubuntu Linux terminal and use the top command to monitor the CPU usage. While top shows the total CPU usage (line 2) is about 48% (13.1% user + 34.9% kernel), the list below does not reflect the correct CPU usage. Only 6.5% CPU is associated with the bash process. NOTE: I already turned off the Irix mode so the 6.5% is on the same scale as line 2.

for i in {1..10000}; do (ls /tmp/ >/dev/null); done

Screenshot of top

I also tried htop with 'Hide kernel threads' option unchecked, but got the same result.

Which part of the system is using the CPU (the kernel?), and why is it hidden from top/htop?
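A plausible explanation (offered as a hypothesis, not a diagnosis): the loop spawns 10,000 short-lived ls processes, and their CPU time is charged to those children, each of which exits before top can sample it. The shell's `times` builtin makes the split visible: its first line is the shell's own CPU time, its second line is the accumulated CPU of all exited children.

```shell
# Re-run a smaller version of the loop, then ask the shell where the CPU went.
for i in $(seq 1 200); do ls /tmp/ >/dev/null; done
times   # line 1: the shell's own user/sys time; line 2: its children's
```

If the second line dwarfs the first, the "missing" CPU from top's process list is exactly the children's share, which top only reports in the aggregate line 2 figures.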

How to make a library installed from OPAM available to OCaml?

I followed this tutorial on OCaml FFI and installed Ctypes through OPAM:

opam install ctypes

However, OCaml does not find the module:

open Ctypes
(* ... *)

I receive the error:

Unbound module Ctypes

It looks like I need to let OCaml know where my Ctypes installation is. Do I need to update some path variable so that OCaml looks for libraries installed through OPAM?

This is Ubuntu 15.04, OCaml 4.01.0, OPAM 1.2.0.

Perl replace produces empty file from script, not from bash

I'm getting pretty frustrated with this problem at the moment; I can't see what I'm doing wrong. Google Chrome gives a notice about not being shut down properly, and I want to get rid of it. I also have some older replacements that deal with the full screen size. In bash, all the lines produce the expected result; however, run from a script file, they produce an empty settings file...

These lines are in the file:

cat ~/.config/google-chrome/Default/Preferences | perl -pe "s/\"work_area_bottom.*/\"work_area_bottom\": $(xrandr | grep \* | cut -d' ' -f4 | cut -d'x' -f2),/" > ~/.config/google-chrome/Default/Preferences
cat ~/.config/google-chrome/Default/Preferences | perl -pe "s/\"bottom.*/\"bottom\": $(xrandr | grep \* | cut -d' ' -f4 | cut -d'x' -f2),/" > ~/.config/google-chrome/Default/Preferences
cat ~/.config/google-chrome/Default/Preferences | perl -pe "s/\"work_area_right.*/\"work_area_right\": $(xrandr | grep \* | cut -d' ' -f4 | cut -d'x' -f1),/" > ~/.config/google-chrome/Default/Preferences
cat ~/.config/google-chrome/Default/Preferences | perl -pe "s/\"right.*/\"right\": $(xrandr | grep \* | cut -d' ' -f4 | cut -d'x' -f1),/" > ~/.config/google-chrome/Default/Preferences
cat ~/.config/google-chrome/Default/Preferences | perl -pe "s/\"exit_type.*/\"exit_type\": \"Normal\",/" > ~/.config/google-chrome/Default/Preferences
cat ~/.config/google-chrome/Default/Preferences | perl -pe "s/\"exited_cleanly.*/\"exited_cleanly\": true,/" > ~/.config/google-chrome/Default/Preferences

I've been googling a lot for this issue; however, I do not get the right search words to get a helpful result.

The problem is solved by using the perl -p -i -e option like so:

perl -p -i -e "s/\"exit_type.*/\"exit_type\": \"Normal\",/" ~/.config/google-chrome/Default/Preferences

The above line is enough to get rid of the Google Chrome message about an incorrect shutdown.
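A deterministic reduction of the likely cause: redirecting output to the same file a command reads from truncates the file before a single byte is read, which is exactly the risk in `cat file | perl ... > file` (the interactive successes were just winning a race). perl -i avoids this by writing to a temporary file first.

```shell
# Demonstrate the same-file redirection pitfall on a throwaway file.
tmp=$(mktemp)
printf 'hello\n' > "$tmp"
tr 'a-z' 'A-Z' < "$tmp" > "$tmp"   # the > truncates $tmp before tr reads it
wc -c < "$tmp"                     # prints 0: the file is now empty
```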

can't read from linux terminal in java

I'm trying to change directory and then execute a second process in Java, but I'm unable to read from the terminal. Any ideas? Thanks in advance. Here is my code:

     Process p, r ;
     String coke = "/home/kdiri/Desktop/TEZ/kemal/RDF/";
     String abc= "/bin/bash cd"+" "+coke;
     r = Runtime.getRuntime().exec(abc);
     try
     {
       r.waitFor();
     }
     catch(InterruptedException ire)
     {}
    BufferedReader br;
    String co = "grep -c bike * | grep -v :0";
    p = Runtime.getRuntime().exec(co);
    br = new BufferedReader(new InputStreamReader(p.getInputStream()));
    p.waitFor();

    String s;
    while ((s = br.readLine()) != null) {
        System.out.println(s);
    }

Trying to run Java class from c++ using JNI, segmentation fault :/

Below is the code I'm using and the compile commands. I'm somewhat new to C++ and don't really know how to go about debugging a segmentation fault; it doesn't give much info. Any advice would be much appreciated!

#include <jni.h>       /* where everything is defined */

int main() {
    JNIEnv *env1;
    JavaVM**  jvm1;
    JavaVMInitArgs vm_args1;
    JavaVMOption options1[3];
    options1[0].optionString = "-Djava.library.path=/usr/lib/jvm/java-7-oracle/jre/lib/amd64/server/";
    options1[1].optionString = "-Djava.class.path=.";
    options1[2].optionString = "-Dulimit -c unlimited";
    options1[0].extraInfo = NULL;
    options1[1].extraInfo = NULL;
    options1[2].extraInfo = NULL;
    vm_args1.version = JNI_VERSION_1_6;
    vm_args1.nOptions = 3;
    vm_args1.options = options1;
    vm_args1.ignoreUnrecognized = 0;


    int reAt = JNI_CreateJavaVM(jvm1, (void**)&env1, &vm_args1);


    return 0;
}

As far as I can tell, the line causing the problem is the JNI_CreateJavaVM call; when it is commented out there is no segmentation fault.

To compile I have tried both of these:

g++ -g main.cpp -I/usr/lib/jvm/java-7-oracle/include -I/usr/lib/jvm/java-7-oracle/include/linux -L/usr/lib/jvm/java-7-oracle/jre/lib/amd64/server -ljvm

g++ -I /usr/lib/jvm/java-7-oracle/include -I /usr/lib/jvm/java-7-oracle/include/linux/ -L /usr/lib/jvm/java-7-oracle/jre/lib/amd64/server main.cpp -l jvm -Wl,-rpath,/usr/lib/jvm/java-7-oracle/jre/lib/amd64/server -o a2.out

IPTables issue after installing Docker

Before installing Docker I had a working IPTables configuration like this

sysctl net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -p tcp --dport port -j DNAT --to-destination ip:port
iptables -t nat -A POSTROUTING -j MASQUERADE

for port forwarding using IPTables. But after installing Docker, these configurations no longer work.

I've tried deleting all IPTables rules and configuring them again, but it's not working. Has anyone had the same issue, or can anyone help me with this?

rsync over SSH Tunnel requests password

The error: ssh to localhost on port 60000 demands a password. It feels like a newbie error, but I just need some help identifying what I'm missing.

Background: I'm managing multiple environments, each in a DMZ. Some are in separate DMZs from others, which means each one has a jump host. I basically want to rsync some files across 4 hops from one system to another through my laptop and a jump host.

The systems:

  • sys_with_files in env faraway
  • laptop in corp network, then VPN'ed with access to faraway 1 and 2
  • jump_host in env faraway2
  • sys_wants_files in env faraway2, behind jump_host

Setup: I'm executing from a NAT'ed VirtualBox VM behind a laptop over VPN because it's the only system with access to all the endpoints. I can't get passwords for a lot of the systems (or change ones where I have root access from something I don't know to something I do), but I do have key access for most.

I'm going to try to skip as much minutiae as I think I can get away with.

Step 1:
a) Set up a config file for the proxy under /home/my_username/.ssh/config. It has:

    Host sys_wants_files
        User root
        ProxyCommand ssh -1 root@sys_wants_files -W "%h":"%p"

    Host sys_with_files
        IdentityFile /home/my_username/.ssh/id_rsa

b) Test access - logged in as my_username successfully using "ssh sys_wants_files"

Step 2:
a) Set up the tunnel with root@laptop# "ssh -v -R 60000:sys_wants_files:60001 sys_with_files" --> no password needed, and it sends me to the remote machine just fine. debug1: remote forward success for: listen 60000, connect sys_wants_files:60001

Step 3:
a) Open a second terminal on the laptop.
b) root@laptop# ssh sys_with_files --> logs in without a password as my_username (my_username@sys_with_files#).
c) telnet localhost 60000 --> connected, SSH-2.0-OpenSSH_6.6.1. The terminal with the tunnel prints "debug1: channel 1: connected to sys_wants_files port 60001".

Here's the problem. These all fail:
d)
  • ssh -v -p 60000 localhost --> requires my_username's password, which doesn't exist --> fails
  • Adding my key to root's authorized_keys and running ssh -v -i /home/my_username/.ssh/id_rsa -p 60000 localhost --> asks for a password
  • Becoming root, adding the key to root's own authorized_keys, and running "ssh -v -i /root/.ssh/id_rsa -p 60000 localhost" --> asks for a password
  • ssh to the hostname instead of localhost --> connection refused
  • rsync -avz -e "ssh -p 60000" file localhost:/root/file_again --> a bunch of variations all ask for a password

I attempted many solutions beyond the stuff above, but I'm not getting anywhere. Any ideas?

The auth log repeatedly shows: timestamp hostname sshd[26523]: Connection closed by 127.0.0.1 [preauth]

Converting Strings in Linux using SWIG for Python

I have a C++ class that can output strings in normal ASCII or wide format, and I want to get the output in Python as a string. I am using SWIG (version 3.0.4) and have read the SWIG documentation. I'm using the following typemap to convert from a standard C string to my C++ class:

%typemap(out) myNamespace::MyString &
{
    $result = PyString_FromString(const char *v);
}

This works fine on Windows with the VS2010 compiler, but it is not working completely on Linux. When I compile the wrap file under Linux, I get the following error:

error: cannot convert ‘std::string*’ to ‘myNamespace::MyString*’ in assignment

So I tried adding an extra typemap to the Linux interface file as so:

%typemap(in) myNamespace::MyString*
{
    $result = PyString_FromString(std::string*);
}

But I still get the same error. If I manually go into the wrap code and fix the assignment like so:

arg2 = (myNamespace::MyString*) ptr;

then the code compiles just fine. I don't see why my additional typemap isn't working. Any ideas or solutions would be greatly appreciated. Thanks in advance.

Colored terminal output does not reset

While writing a larger program I stumbled upon a small problem with colored text output. Here's a much simpler program that reproduces the issue.

#include <stdio.h>

#define COL_RESET "\033[0m"
#define COL_BG_RED  "\x1B[41m"

char *str = "the quick brown fox jumped over the lazy dog";

int main(int argc, char *argv[])
{
    int i = 10;
    while (i) {
        puts(COL_BG_RED);
        puts(str);
        puts(COL_RESET);
        puts(str);
        i--;
    }
    return 0;
}

Now this is what I get when I run the program:

First time - expected result (screenshot)

Second time - mangled output (screenshot)

As you can tell, the program seemingly randomly prints lines in red even after resetting the colors. When started in a fresh terminal it always produces the expected result, but unless I run clear, there is no guarantee the output won't be mangled as in the second picture.

In the pictures I'm using xterm, although other terminals do the same thing.

What can I do to prevent this?
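One likely contributor (an assumption to test, not a certain diagnosis): puts(COL_BG_RED) emits a newline while red is still the active background, and some terminals fill scrolled or erased cells with the current background color ("background color erase"). Emitting the reset before the newline sidesteps that; here is the same ordering shown at the shell level.

```shell
# red_line prints its argument on a red background, resetting colors
# *before* the newline so the terminal never scrolls while red is active.
red_line() {
    printf '\033[41m%s\033[0m\n' "$1"
}
red_line "the quick brown fox jumped over the lazy dog"
```

In the C program, the analogous change would be a single printf("%s%s%s\n", COL_BG_RED, str, COL_RESET) instead of three puts calls.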

Can I use Windows DLL files on Linux?

I am programming in C++ using Qt on Linux.

I want to import a C-based .dll file into my C++ project. Is this possible?

Odd delays when connecting many sockets to a server

I'm seeing some odd behaviour when trying to warm up a pool of connections that uses java.net.Socket to connect.

Here is a little program that connects a lot of sockets:

public class SocketTest {
    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 4000; i++) {
            long t = System.currentTimeMillis();
            new Socket("localhost", 3306);
            long total = System.currentTimeMillis() - t;
            if (total > 10)
                System.out.format("%5d %dms\n", i, total);
        }
    }
}

I tried this against the open ports on my machine.

MySQL:

  0 34ms
  468 1000ms
  676 997ms
  831 998ms
  970 997ms
  ...

ipp:

    0 41ms
  231 998ms
  232 998ms
  233 999ms
  234 999ms
  236 3002ms
  238 3002ms
  240 3002ms
  ...

eJabberd (xmpp server):

    0 45ms
   27 998ms
   42 999ms
   81 997ms
   99 1000ms
  120 997ms
  135 998ms
  147 997ms

MongoDB:

    0 73ms
 1314 999ms
 1791 998ms
 2098 999ms
 2466 1000ms
 2717 1000ms

nginx does not exhibit this behaviour (and connects very quickly).

In a multi-threaded test I always see only numbers that are very close to 1000ms, 3000ms, 7000ms or 15000ms.

Putting a Thread.sleep(50) between the socket connects delays the behaviour occurring for a while.

I originally observed this behaviour on Windows, but the above was tested on Xubuntu 14.04.

I profiled the application and the time is spent in PlainSocketImpl.socketConnect() which delegates to the OS-native method on unix systems.

I also tried with both the Oracle JDK 8 and OpenJDK 7, and get the same results.

The consistency of the numbers makes me think it's some sort of throttling or scheduling behaviour at the operating system layer rather than something built into the server.

In smaller cases like connecting 100 sockets, the total time could be 10 seconds, but drops to 2 seconds with Thread.sleep(10) in between.

Can anyone shed some light?
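One hypothesis consistent with the numbers (offered as an assumption to verify, not a diagnosis): 1s/3s/7s/15s is the classic SYN retransmission backoff, which happens when the listener's accept backlog fills and the kernel drops incoming SYNs. These proc values are the relevant knobs to inspect on the test machine:

```shell
# show_syn_knobs: dump the kernel settings that govern SYN retries and the
# maximum accept backlog; values vary per machine, hence the fallbacks.
show_syn_knobs() {
    printf 'tcp_syn_retries=%s\n' "$(cat /proc/sys/net/ipv4/tcp_syn_retries 2>/dev/null || echo unknown)"
    printf 'somaxconn=%s\n' "$(cat /proc/sys/net/core/somaxconn 2>/dev/null || echo unknown)"
}
show_syn_knobs
```

The effective queue per listener is min(the server's own listen() backlog argument, somaxconn), which would also explain why nginx (typically configured with a large backlog) doesn't show the delays.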

How do I upgrade docker on my Ubuntu 14.04 machine?

I tried following the instructions provided on their website to upgrade Docker on my local machine. However, when I do that, it just saves the install script to a local file instead of running it:

$ wget -N https://get.docker.com/ | sh
--2015-05-05 15:13:03--  https://get.docker.com/
Resolving get.docker.com (get.docker.com)... 162.242.195.82
Connecting to get.docker.com (get.docker.com)|162.242.195.82|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7452 (7.3K) [text/plain]
Saving to: ‘index.html’

100%[=============================>] 7,452       --.-K/s   in 0s      

2015-05-05 15:13:03 (677 MB/s) - ‘index.html’ saved [7452/7452]

This is obviously not what I want.
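The pipeline only works if the downloader writes the script body to stdout; wget -N saves it to index.html instead, so sh receives nothing. wget needs -O- for that, and curl -sSL behaves the same way.

```shell
# The corrected commands (not executed here, since they install software):
#   wget -qO- https://get.docker.com/ | sh
#   curl -sSL https://get.docker.com/ | sh
#
# A harmless local stand-in for the same stdout-into-sh pattern:
printf 'echo ok\n' | sh
```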