WSJ and some others are reporting that the Beatles are on their way to iTunes. I agree that this is much more likely than any sort of streaming service but it’s far less interesting, at least to me.
I got onto macrumors, and here’s why
If you found this site through my Twitter profile as seen on macrumors.com, this is why.
This diff shows a new URI scheme was added to iTunes 10.1’s Info.plist file to handle “iTunes Live Stream URL”s using itls://. Running strings on the iTunes binary also shows that itls is referenced within the binary.
Dear OpenSSH, too bad
So a friend pointed something out to me on the OpenSSH website. They’re complaining that a number of large companies have never donated a dime “despite numerous requests.”
The first line of their site reads as (emphasis theirs):
OpenSSH is a FREE version of the SSH connectivity tools that technical users of the Internet rely on.
The very last line on the same page reads as:
In the 10 years since the inception of the OpenSSH project, these companies have contributed not even a dime of thanks in support of the OpenSSH project (despite numerous requests).
Tacky.
Meanwhile, the changelog for this open source program contains a number of entries from a few of the mentioned companies. Isn’t this how open source software is supposed to work?
Multicore Solr on Ubuntu 10.04
UPDATE: New post on getting Multicore Solr 3.4 running on Ubuntu 10.04
Been working a lot lately with the Apache Solr project.
Solr is the popular, blazing fast open source enterprise search platform from the Apache Lucene project. Its major features include powerful full-text search, hit highlighting, faceted search, dynamic clustering, database integration, and rich document (e.g., Word, PDF) handling. Solr is highly scalable, providing distributed search and index replication, and it powers the search and navigation features of many of the world’s largest internet sites.
Solr is written in Java and runs as a standalone full-text search server within a servlet container such as Tomcat. Solr uses the Lucene Java search library at its core for full-text indexing and search, and has REST-like HTTP/XML and JSON APIs that make it easy to use from virtually any programming language. Solr’s powerful external configuration allows it to be tailored to almost any type of application without Java coding, and it has an extensive plugin architecture when more advanced customization is required.
One of the features of Solr is called multicore. In the context of Solr, multicore simply means running multiple instances of Solr inside the same servlet container, allowing for separate configurations and indexes per core while still providing administration through one interface. The Solr wiki defines it as:
Multiple cores let you have a single Solr instance with separate configurations and indexes, with their own config and schema for very different applications, but still have the convenience of unified administration. Individual indexes are still fairly isolated, but you can manage them as a single application, create new indexes on the fly by spinning up new SolrCores, and even make one SolrCore replace another SolrCore without ever restarting your Servlet Container.
Although I’ve set up a few instances of Solr using Tomcat, I thought I’d write out just how easy it is to get Solr up and running on Ubuntu Server 10.04, as well as talk about some of the scripts I’ve written to make the process of adding, removing and reloading cores easier. This post assumes you have already installed Ubuntu Server with internet access and have a basic understanding of how to use Ubuntu and Linux in general.
Installing Solr
On your Ubuntu server, become root using ‘sudo su -‘ and issue the following command:
apt-get install solr-tomcat curl -y
This will install Solr from Ubuntu’s repositories as well as install and configure Tomcat. At this point, you have a fully working Solr installation that only needs to be tweaked for your environment. Solr itself lives in three spots: /usr/share/solr, /var/lib/solr and /etc/solr. These contain the Solr home directory, the data directory and the configuration data, respectively.
Enable Multicore
Enabling multicore is as simple as creating solr.xml in the /usr/share/solr directory and restarting Tomcat. After this one-time restart, Tomcat only needs to be restarted under certain conditions; in normal operation you should never need to restart it.
Using your favorite text editor create a file called solr.xml at /usr/share/solr with the following contents:
<solr persistent="true" sharedLib="lib">
  <cores adminPath="/admin/cores">
  </cores>
</solr>
Next, you need to ensure that Tomcat is able to write out new versions of the solr.xml file; as cores are added or removed, this file is updated. The following commands give Tomcat write permissions to the needed directory and file:
chown tomcat6.tomcat6 /usr/share/solr/solr.xml
chown tomcat6.tomcat6 /usr/share/solr
That’s it. You can now issue the following command to restart Tomcat and in turn Solr:
service tomcat6 restart
Managing Cores
At this point you’re ready to start creating new cores. Before you can do so, however, you need to create config files and directories and set permissions. To make this process a bit easier, I created a set of scripts that does all of this for you based on a template config directory.
Create the template config directory by issuing the following command:
cp -av /etc/solr/conf /etc/solr/conftemplate
Next, edit /etc/solr/conftemplate/solrconfig.xml and find the dataDir option. Change the dataDir line from:
<dataDir>/var/lib/solr/data</dataDir>
To:
<dataDir>/var/lib/solr/data/CORENAME</dataDir>
This will ensure the scripts work correctly.
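If you want to see exactly what the substitution does before pointing a script at the real template, here is a minimal sketch of the same sed call run against a throwaway file instead of /etc/solr/conftemplate/solrconfig.xml (the temp file and core name here are just illustrations):

```shell
# Stand-in for the real template: show what the newCore script's
# sed line does to the dataDir placeholder
tmpfile=$(mktemp)
echo '<dataDir>/var/lib/solr/data/CORENAME</dataDir>' > "$tmpfile"
name=core0
sed -i "s/CORENAME/$name/" "$tmpfile"
cat "$tmpfile"   # prints <dataDir>/var/lib/solr/data/core0</dataDir>
rm -f "$tmpfile"
```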
Creating a new Core
Below is the newCore script. Copy and paste it into a file called newCore and make it executable with ‘chmod +x newCore’.
#!/bin/bash
# creates a new Solr core
if [ "$1" = "" ]; then
    echo -n "Name of core to create: "
    read name
else
    name=$1
fi
mkdir /var/lib/solr/data/$name
chown tomcat6.tomcat6 /var/lib/solr/data/$name
mkdir -p /etc/solr/conf/$name/conf
cp -a /etc/solr/conftemplate/* /etc/solr/conf/$name/conf/
sed -i "s/CORENAME/$name/" /etc/solr/conf/$name/conf/solrconfig.xml
curl "http://localhost:8080/solr/admin/cores?action=CREATE&name=$name&instanceDir=/etc/solr/conf/$name"
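Before running the script against a live Solr, you can sanity-check the admin call it will make. This hypothetical helper just assembles and prints the CoreAdmin CREATE URL for a given core name; localhost:8080 matches the setup above, so adjust it if your Tomcat listens elsewhere:

```shell
# Hypothetical helper: print the CoreAdmin CREATE URL that the
# newCore script would request for a given core name
createUrl() {
    local name=$1
    echo "http://localhost:8080/solr/admin/cores?action=CREATE&name=$name&instanceDir=/etc/solr/conf/$name"
}
createUrl core0
```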
You can now create a new core by issuing the following command:
./newCore core0
If it was successful, you should see something similar to this on screen:
<?xml version="1.0" encoding="UTF-8"?>
<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">352</int>
  </lst>
  <str name="core">core0</str>
  <str name="saved">/usr/share/solr/solr.xml</str>
</response>
If you get any other response, particularly one about permissions, go back and review this post as you’ve most likely missed something.
This script has created a new Solr core with the configuration directory set to /etc/solr/conf/core0/conf. There you can edit the schema.xml file. To view the default schema.xml file, you can visit http://localhost:8080/solr/core0/admin/. Replace localhost with the hostname or IP address of your Solr server if it is not localhost.
Next time I’ll talk about how to import documents into a core as well as how to reload a core, swap cores or remove/unload a core and merge the index between two or more cores.
Update: Here are the rest of the scripts I’ve written for Solr
Reload a Core
Save to a file called reloadCore
#!/bin/bash
# reloads a Solr core
if [ "$1" = "" ]; then
    echo -n "Name of core to reload: "
    read name
else
    name=$1
fi
if [ "$name" = "" ] || [ ! -d /var/lib/solr/data/$name ]; then
    echo "Core doesn't exist"
    exit
fi
curl "http://localhost:8080/solr/admin/cores?action=RELOAD&core=$name"
Swap Cores
Save to a file called swapCores
#!/bin/bash
# swaps two Solr cores
if [ "$2" = "" ]; then
    echo -n "Name of first core: "
    read name1
    echo -n "Name of second core: "
    read name2
else
    name1=$1
    name2=$2
fi
if [ ! -d /var/lib/solr/data/$name1 ] || [ ! -d /var/lib/solr/data/$name2 ]; then
    echo "Core doesn't exist"
    exit
fi
curl "http://localhost:8080/solr/admin/cores?action=SWAP&core=$name1&other=$name2"
Unload/Delete a Core
Save to a file called unloadCore
#!/bin/bash
clear
echo "*************************************************************************"
echo "*************************************************************************"
echo
echo "        You are about to *permanently* delete a core!"
echo "        There is no going back"
echo
echo "*************************************************************************"
echo "*************************************************************************"
echo
echo -n "Type 'delete core' to continue or control-c to bail: "
read answer
if [ "$answer" != "delete core" ]; then
    exit
fi
# removes a Solr core
if [ "$1" = "" ]; then
    echo -n "Name of core to remove: "
    read name
else
    name=$1
fi
if [ "$name" = "" ] || [ ! -d /var/lib/solr/data/$name ]; then
    echo "Core doesn't exist"
    exit
fi
curl "http://localhost:8080/solr/admin/cores?action=UNLOAD&core=$name"
sleep 5
rm -rf /var/lib/solr/data/$name
rm -rf /etc/solr/conf/$name
Merge Cores
Save to a file called mergeCores
#!/bin/bash
# merges two Solr cores
if [ "$2" = "" ]; then
    echo -n "Name of first core: "
    read name1
    echo -n "Name of second core: "
    read name2
else
    name1=$1
    name2=$2
fi
if [ ! -d /var/lib/solr/data/$name1 ] || [ ! -d /var/lib/solr/data/$name2 ]; then
    echo "Core doesn't exist"
    exit
fi
# commit both cores so their indexes are flushed to disk before the merge
curl "http://localhost:8080/solr/$name1/update" --data-binary '<commit/>' -H 'Content-type:text/xml; charset=utf-8'
curl "http://localhost:8080/solr/$name2/update" --data-binary '<commit/>' -H 'Content-type:text/xml; charset=utf-8'
curl "http://localhost:8080/solr/admin/cores?action=mergeindexes&core=$name1&indexDir=/var/lib/solr/data/$name2/index"
# commit again so the merged documents become visible
curl "http://localhost:8080/solr/$name1/update" --data-binary '<commit/>' -H 'Content-type:text/xml; charset=utf-8'
curl "http://localhost:8080/solr/$name2/update" --data-binary '<commit/>' -H 'Content-type:text/xml; charset=utf-8'
How to replace the hard drive in your Windows 7 system
I recently went through this exact same procedure to replace a drive in my own Windows 7 system. The difference here is that Paul Thurrott took the time to write a post about the procedure. While it isn’t anywhere near as simple as cloning a Mac system, it is certainly far easier than it has ever been in the past.
OMG Oracle is removing InnoDB from MySQL…
Well, not quite. It turns out people have been getting confused by the pricing grid Oracle has on its site for the various products it provides. The confusion comes from the Embedded version of MySQL not supporting InnoDB, and from the community edition not being listed on the grid at all.
The community edition still has InnoDB built in as an available storage engine, but you can’t buy support from Oracle.
http://www.mysql.com/products/
http://palominodb.com/blog/2010/11/04/oracle-not-removing-innodb
Is the iOS version of VLC violating the GPL?
TUAW is reporting that VLC may soon be removed from the App Store for violating the GPL, on the grounds that the copy downloaded to your device is copy protected with DRM.
I’m not a lawyer, but I don’t think VLC on iOS is in violation of anything so long as the source is available. Just because I can’t copy the compiled program doesn’t mean it violates the GPL; it comes down to the source code itself. By that logic, a version of VLC compiled for older PPC Mac systems also violates the GPL because I can’t copy the binary from a PPC system to an Intel system, or even from a Mac to a PC. Further, you’d never copy the program files from one Windows PC to another because it’s far easier to download the installer and install it. The same goes for VLC on an iPod or iPhone: it’s easier to just install it from the App Store, and it’s free after all.
iPad to be sold by Verizon
Verizon is selling the iPad starting October 28th. There is nothing new about the iPad itself; it’s really just a bundle of the iPad with a Verizon MiFi device, but the pricing is attractive all the same. PCMag has the details at http://www.pcmag.com/article2/0,2817,2370745,00.asp
Macbook Air about to be refreshed
There have been a lot of rumors flying around that the MacBook Air is finally getting an update. The Air hasn’t gotten a meaningful update in quite a while and is currently the only laptop model from Apple that doesn’t have the large multitouch trackpad. Rumors include 11″ and 13″ SKUs and SSD-only storage. AppleInsider has the details at http://www.appleinsider.com/articles/10/10/16/more_details_surface_on_apples_next_generation_macbook_airs.html.
Using expect to automate a process
In my previous post I talked about needing a TFTP server in order to serve some files to a hardware device. This post describes how I used expect to automate the process of logging into the hardware device and issuing commands that copy in a config file, commit it to the device, upgrade the firmware and finally tell the device to reset to factory defaults and reboot.
Expect is a way to programmatically work with a normally interactive process. Using expect you can write a script that telnets into a system and then issues commands based on what it “sees.” Here is the script I used, with some important values removed, to automate the process of updating a number of devices.
#!/usr/bin/expect
set timeout 300
spawn telnet 192.168.1.1
expect "login: "
send "root\n"
expect "Password: "
send "tehmagicphrase\n"
expect "# "
send "cd /tmp \n"
expect "# "
send "tftp -g -r config.ini 192.168.1.159\n"
expect "# "
send "config.sh import config.ini\n"
expect "# "
send "tftp -g -r firmware.img 192.168.1.159\n"
expect "# "
send "firmware_upgrade /tmp/firmware.img 1\n"
expect EOF
The above script was saved into a file called pushConfig.expect and set as executable using ‘chmod +x pushConfig.expect’. To run the script, I powered on the device and waited for it to be ready; once it was, I issued ./pushConfig.expect to start the update process.
Using expect is fairly straightforward. The most difficult part is ensuring you correctly tell expect what to look for before sending the next command. In the script above I do the following:
set timeout 300
This tells expect to wait up to 5 minutes for matching text before giving up and continuing to the next send command. In other words, after sending some data it will wait up to 5 minutes to see whatever is in the expect line that follows the send. In the case of my script, the firmware upgrade could take quite a bit of time and I didn’t want it to time out, so I set the value fairly high.
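If you want a feel for this kind of bounded wait outside of expect, the timeout command from GNU coreutils is a rough shell analogy: it gives a command a fixed number of seconds before giving up. This is just an illustration of the idea, not part of the expect script itself:

```shell
# Rough analogy to expect's "set timeout": allow the command at
# most 2 seconds, then give up and move on
timeout 2 sleep 5 || echo "gave up after 2 seconds"
```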
The next line tells expect to start a telnet session to a remote machine and then to wait until it sees:
login:
Once it sees that, it sends the username. The script continues like this until it sees EOF, at which point expect knows the process is complete and exits.
By using an expect script I was able to simply power on the hardware device and wait for it to boot. Once booted I ran the script. This saved me and a co-worker a lot of time while pushing custom configurations and upgrading the firmware on a number of devices.
Expect is capable of a lot more than I used in my example and can react differently based on what it receives back from the interactive process or even loop over a series of commands. To learn more about expect try ‘man expect’ or search your favorite search engine.