Getting started with the CASS tools
CASS (Contrast Analysis of Semantic Similarity) is a set of tools for conducting contrast-based analyses of semantic similarity in text. CASS is based on the BEAGLE model described by Jones and Mewhort (2007). For a more detailed explanation, see Holtzman et al. (under review).
Version
The current version of the tools is 0.0.2 (08/05/2010).
License
CASS is licensed under the GPL license. See the included LICENSE file for details.
Installation
The CASS tools are packaged as a library for the Ruby programming language. You must have a Ruby interpreter installed on your system, as well as the NArray gem. To install, follow these steps:
(1) Install Ruby. Installers for most platforms are available from the official Ruby website (http://www.ruby-lang.org/); for Windows, the most recent installer is available from the RubyInstaller project (http://rubyinstaller.org/). CASS should work with any recent version of Ruby (1.8.6+), but we recommend using 1.9+.
(2) Install the NArray[http://narray.rubyforge.org] library, which provides the numerical operations CASS requires. On most platforms, you should be able to just type the following from the command prompt:
gem install narray
On Windows, it’s slightly more involved, as you’ll need to install the right version of the gem and explicitly indicate the architecture. If you’re running Ruby 1.8, use the following command:
gem install narray --platform=x86-mingw32
On Ruby 1.9+, use the following:
gem install narray-ruby19 --platform=x86-mingw32
Additional instructions for installing NArray are available on the NArray website should you need them, though they appear to be out of date.
(3) Install the CASS gem from the command prompt, like so:
gem install cass
(4) Download the sample analysis files (cass_sample.zip), which will help you get started working with CASS. Unpack the file anywhere you like, and you should be ready to roll.
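Once the gems are installed, you can check that Ruby can load them by running a quick sanity check from the command prompt (the explicit require of rubygems is only needed on Ruby 1.8; it’s harmless on 1.9+):

ruby -e "require 'rubygems'; require 'narray'; require 'cass'; puts 'CASS loaded successfully'"

If this prints the success message rather than a LoadError, the installation is working.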
Usage
There are two general ways to use CASS:
The easy way
For users without prior programming experience, CASS is streamlined to make things as user-friendly as possible. Assuming you’ve installed CASS per the instructions above, you can run a full CASS analysis just by tweaking a few settings and running the analysis script (run_cass.rb) included in the sample analysis package (cass_sample.zip). Full instructions will appear in a more detailed manual; in brief, here’s what you need to do to get up and running:
- Download cass_sample.zip and unpack it somewhere. The package contains several files that tell CASS what to do, as well as some sample text you can process. These include:
  - contrasts.txt: a specification of the contrasts you’d like to run on the text, one per line. For an explanation of the format the contrasts are in, see the next section.
  - default.spec: the main specification file containing all the key settings CASS needs in order to run. All settings are commented in detail in the file itself. You can create as many .spec files as you like (no need to edit this one repeatedly!); just make sure to edit run_cass.rb to indicate which .spec file to use.
  - stopwords.txt: a sample list of stopwords to exclude from analysis (CASS will use this file by default). These are mostly high-frequency function words that carry little meaning but can strongly bias a text.
  - sample1.txt and sample2.txt: two sample documents to get you started.
  - run_cass.rb: the script you’ll use to run CASS.
- Edit the settings as you please. An explanation of what everything means is in the .spec file itself; if you’re just getting started, you can just leave everything as is and run the sample analysis.
- Run run_cass.rb. If you’re on Windows, you may be able to double-click on the script to run it; however, if you do that, you won’t see any of the output. On most platforms (and optionally on Windows), you’ll have to run the script from the command prompt. You can do this by opening up a terminal window (or, in Windows, a command prompt), navigating to the directory that contains the sample analysis files, and typing:
ruby run_cass.rb

After doing that, you should get a bunch of output showing you exactly what’s going on. There should also be some new files in the working directory containing the results of the analysis.
Assuming the analysis ran successfully, you can now set about running your own analyses.
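When you do, the main file to edit is contrasts.txt. Based on the contrast format described in the next section, each line presumably holds one four-word contrast; for the cake-versus-spinach example discussed below, the file would contain a single line like this:

cake spinach good bad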
As a library
Advanced users familiar with Ruby or other programming languages will probably want to use CASS as a library. Assuming you’ve installed CASS as a gem (see above), running a basic analysis with CASS is straightforward. First, we require the gem:
require 'cass'

We don’t want the inconvenience of having to call all the methods through the Cass module (e.g., Cass::Contrast.new, Cass::Document.new, etc.), so let’s go ahead and include the contents of the module in the namespace:
include Cass
Now we can start running analyses. Let’s say we have a text file containing transcribed conversations of people discussing foods they like and dislike (e.g., cake.txt in the sample analysis package). Suppose we’re particularly interested in two foods: cake and spinach. Our goal is to test the hypothesis that people prefer cake to spinach. Operationally, we’re going to do that by examining the relative distance from ‘spinach’ and ‘cake’ to the terms ‘good’ and ‘bad’ in semantic space.
The first thing to do is set up the right contrasts. In this case, we’ll create a single contrast comparing the distance between cake and spinach with respect to good and bad:
contrast = Contrast.new("cake spinach good bad")
CASS interprets a string of four words as two ordered pairs: ‘cake’ and ‘spinach’ form one pair, ‘good’ and ‘bad’ the other (we could, equivalently, initialize the contrast by passing the 4-element array [‘cake’, ‘spinach’, ‘good’, ‘bad’]).
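Equivalently, using the array form just mentioned:

contrast = Contrast.new(['cake', 'spinach', 'good', 'bad'])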
Next, we read the file containing the transcripts:
text = File.new("cake.txt").read

And then we can create a corresponding Document. We initialize the Document object by passing a descriptive name, a list of target words (in this case, the target words will be extracted from the contrast we’ve already defined, but we could also have passed an array of words), and the full text we want to analyze:
doc = Document.new("cake_vs_spinach", contrast, text)
If we want to see some information about the contents of our document, we can type:
doc.summary

And that prints something like this to our screen:

> Summary for document 'cake_vs_spinach':
> 4 target words (cake, spinach, good, bad)
> 35 words in context.
> Using 21 lines (containing at least one target word) for analysis.
Nothing too fancy, just basic descriptive information. The summary method has some additional arguments we could use to get more detailed information (e.g., word_count, list_context, etc.), but we’ll skip those for now.
Having created the Document and specified the target words, we can now generate its cooccurrence matrix:
doc.coocurrence

This step creates a correlation matrix in the Document object that represents the similarities between all possible target pairs. The cooccurrence matrix forms the basis for our subsequent analysis.
Now if we want to compute the interaction term for our contrast (i.e., the difference of differences, reflecting the equation (cake.good - cake.bad) - (spinach.good - spinach.bad)), all we have to do is:
contrast.apply(doc)
And we get back something that looks like this:
0.5117 0.4039 0.3256 0.4511 0.2333
The first four values represent the similarities between the four pairs of words used to generate the interaction term (e.g., the first value reflects the correlation between ‘cake’ and ‘good’, the second between ‘cake’ and ‘bad’, and so on), and the fifth is the interaction term itself. So in this case, the result (0.23) tells us that there’s a positive bias in the text, such that cake is semantically more closely related to good (relative to bad) than spinach is. Hypothesis confirmed!
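To see where that fifth value comes from, plug the first four values into the interaction formula: (0.5117 - 0.4039) - (0.3256 - 0.4511) = 0.1078 + 0.1255 = 0.2333.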
Well, sort of. By itself, the number 0.23 doesn’t mean very much. We don’t know what the standard error is, so we have no idea whether 0.23 is a very large number or a very small one that might occur pretty often just by chance. Fortunately, we can get some p values by bootstrapping a distribution around our observed value. First, we generate the distribution:
Analysis.bootstrap_test(doc, contrast, "speech_results.txt", 1000)
Here we call the bootstrap_test method, feeding it the document we want to analyze, the contrast (or contrasts) we want to apply, the filename root we want to use, and the number of iterations we want to run (generally, as many as is computationally viable). The results will be saved to a plaintext file with the specified name, and we can peruse that file at our leisure. If we open it up, the first few lines look something like this (the exact values in your file will differ somewhat due to the bootstrapping):
contrast               result_id  doc_name  pair_1  pair_2  pair_3  pair_4  interaction_term
cake.spinach.good.bad  observed   cake.txt  0.5117  0.4039  0.3256  0.4511  0.2333
cake.spinach.good.bad  boot_1     cake.txt  0.5146  0.4585  0.1885  0.45    0.3176
cake.spinach.good.bad  boot_2     cake.txt  0.4606  0.4421  0.2984  0.4563  0.1764
cake.spinach.good.bad  boot_3     cake.txt  0.4215  0.438   0.0694  0.5695  0.4836
cake.spinach.good.bad  boot_4     cake.txt  0.5734  0.353   0.2094  0.5013  0.5123
…
The columns tell us, respectively, which contrast was applied, the type of result (the first row gives the actual, or observed, value; the remaining rows give the bootstrap iterations), which file the results came from, the four pairwise similarities, and the interaction term. Given this information, we can now compare the bootstrapped distribution to zero to test our hypothesis. We do that like this:
Analysis.p_values("speech_results.txt", 'boot')
…where the first argument specifies the full path to the file containing the bootstrap results we want to summarize, and the second argument indicates the type of test that was conducted (either ‘boot’ or ‘perm’). The results will be written to a file named speech_results_p_values.txt. If we open that document up, we see this:
file      contrast               N     value   p-value
cake.txt  cake.spinach.good.bad  1000  0.2333  0.0
cake.txt  mean                   1000  0.2333  0.0
As you can see, the last column (p-value) reads 0.0, which is to say, none of the 1,000 bootstrap iterations produced a value at or below zero. So we can reject the null hypothesis of zero effect at p < .001 in this case. Put differently, it’s exceedingly unlikely that we would get this result (people having a positive bias towards cake relative to spinach) just by chance.

Of course, that’s a contrived example that won’t surprise anyone. But the point is that you can use the CASS tools in a similar way to ask other, much more interesting questions about the relations between different terms in semantic space. So that’s the end of this overview; to learn more about the other functionality in CASS, you can surf around this RDoc, or just experiment with the software. Eventually, there will be a more comprehensive manual.
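To recap, here is the whole example gathered into a single script, using only the calls shown above. This is a minimal sketch: it assumes cake.txt sits in the working directory, and that bootstrap_test accepts a single Contrast as in the call above (if it expects an array, wrap the contrast in one, e.g. [contrast]).

require 'rubygems'   # needed on Ruby 1.8 only
require 'cass'
include Cass

# Define the contrast and load the text to analyze.
contrast = Contrast.new("cake spinach good bad")
text = File.new("cake.txt").read

# Build the document and its cooccurrence matrix.
doc = Document.new("cake_vs_spinach", contrast, text)
doc.summary
doc.coocurrence

# Pairwise similarities and the interaction term for the observed data.
puts contrast.apply(doc)

# Bootstrap the contrast, then write p-values to speech_results_p_values.txt.
Analysis.bootstrap_test(doc, contrast, "speech_results.txt", 1000)
Analysis.p_values("speech_results.txt", 'boot')

Running this script with ruby should reproduce the walkthrough above end to end (with slightly different bootstrap values each time).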
Bug reports / installation problems
If you have questions about usage, email Nick Holtzman. For bug reports or technical questions about the Ruby code, email Tal Yarkoni.