Class: Ariel::TokenStream

Inherits: Object
Includes: Enumerable
Defined in: lib/ariel/token_stream.rb

Overview

A TokenStream instance stores a stream of Tokens after using its tokenization rules to extract them from a string. A TokenStream knows its current position (TokenStream#cur_pos), which advances whenever any Enumerable method is used (due to the redefinition of TokenStream#each): as you move through the stream, the current token is returned and then consumed. A TokenStream also provides methods for finding patterns in a given stream, much like StringScanner but over an array of tokens. For rule generation, a particular token can be marked as the start point of a label. Finally, a TokenStream records whether it is in a reversed or unreversed state, so that when rules are applied, they are always applied from the front or the end of the stream as required, whether it is reversed or not.
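The consuming-cursor behaviour described above can be illustrated with a minimal stand-in class (plain Ruby, independent of Ariel; the class name and fields are illustrative, not part of the library):

```ruby
# Minimal sketch of a stream whose Enumerable methods consume tokens,
# mirroring the cursor behaviour described above (not Ariel's code).
class MiniStream
  include Enumerable
  attr_reader :cur_pos

  def initialize(tokens)
    @tokens = tokens
    @cur_pos = 0
  end

  # Return the current token and advance the cursor, or nil at the end.
  def advance
    token = @tokens[@cur_pos]
    @cur_pos += 1 unless token.nil?
    token
  end

  # Redefining each in terms of advance makes every Enumerable
  # method (map, find, each_cons, ...) consume the stream.
  def each
    while (token = advance)
      yield token
    end
  end
end

s = MiniStream.new(%w[a b c d])
s.advance   # consumes "a"
s.take(2)   # => ["b", "c"] -- consumed via each
s.cur_pos   # => 3
```

Because `each` starts from the cursor rather than the beginning, two successive `take(2)` calls return different tokens, which is exactly the property the rule-application code relies on.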

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize ⇒ TokenStream

Returns a new instance of TokenStream.



# File 'lib/ariel/token_stream.rb', line 20

def initialize()
  @tokens=[]
  @cur_pos=0
  @original_text = ""
  @token_regexen = [
    Wildcards.list[:html_tag], # Match html tags that don't have attributes
    /\d+/, # Match any numbers, probably good to make a split
    /\b\w+\b/, # Pick up words, will split at punctuation
    /\S/ # Grab any characters left over that aren't whitespace
  ]
  @label_tag_regexen = [LabelUtils.any_label_regex]
  @reversed=false
end

Instance Attribute Details

#cur_pos ⇒ Object

Returns the value of attribute cur_pos.



# File 'lib/ariel/token_stream.rb', line 18

def cur_pos
  @cur_pos
end

#label_index ⇒ Object

Returns the value of attribute label_index.



# File 'lib/ariel/token_stream.rb', line 18

def label_index
  @label_index
end

#original_text ⇒ Object

Returns the value of attribute original_text.



# File 'lib/ariel/token_stream.rb', line 18

def original_text
  @original_text
end

#tokens ⇒ Object

Returns the value of attribute tokens.



# File 'lib/ariel/token_stream.rb', line 18

def tokens
  @tokens
end

Instance Method Details

#advance ⇒ Object

Returns the current Token and consumes it.



# File 'lib/ariel/token_stream.rb', line 134

def advance
  return nil if @cur_pos > @tokens.size
  @cur_pos += 1
  @tokens[@cur_pos - 1]
end

#current_token ⇒ Object

Returns the current Token.



# File 'lib/ariel/token_stream.rb', line 204

def current_token
  @tokens[@cur_pos]
end

#deep_clone ⇒ Object

Used to ensure operations such as @tokens.reverse! in one instance won’t inadvertently affect another.



# File 'lib/ariel/token_stream.rb', line 89

def deep_clone
  Marshal::load(Marshal.dump(self))
end

#each ⇒ Object

Iterates over and consumes every Token from the current position (cur_pos) onwards.



# File 'lib/ariel/token_stream.rb', line 197

def each
  while (token = self.advance)
    yield token
  end
end

#raw_text(l_index = 0, r_index = -1) ⇒ Object

Returns all the text represented by the instance’s stored tokens. It will not strip label tags, even if the stream is marked as containing them. However, you should not expect to recover the raw text once any label_tags have been filtered out (TokenStream#remove_label_tags). See also TokenStream#text.



# File 'lib/ariel/token_stream.rb', line 125

def raw_text(l_index=0, r_index=-1)
  return "" if @tokens.size==0
  if reversed?
    l_index, r_index = r_index, l_index
  end
  @original_text[@tokens[l_index].start_loc...@tokens[r_index].end_loc]
end
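Because each Token carries its offsets into the original string, the reconstruction above is a single string slice. A plain-Ruby sketch (the Token struct, sample text, and offsets here are illustrative, not Ariel’s actual Token class):

```ruby
# Tokens remember where they came from, so any stretch of the stream
# maps back to a slice of the original text.
Token = Struct.new(:text, :start_loc, :end_loc)

original = "price: 42 USD"
tokens = [Token.new("price", 0, 5),
          Token.new("42", 7, 9),
          Token.new("USD", 10, 13)]

# Equivalent of raw_text(l_index, r_index) for l_index=1, r_index=-1:
raw = original[tokens[1].start_loc...tokens[-1].end_loc]
# => "42 USD" -- includes the whitespace between the tokens
```

Note that slicing from start_loc of the left token to end_loc of the right token preserves inter-token text (such as whitespace) that the tokens themselves do not store.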

#remove_label_tags ⇒ Object

Goes through all stored Token instances, removing each one for which Token#is_label_tag? is true. Called after a labeled document has been extracted to a tree, ready for the rule-learning process.



# File 'lib/ariel/token_stream.rb', line 61

def remove_label_tags
  @tokens.delete_if {|token| token.is_label_tag?}
end

#reverse ⇒ Object

Returns a copy of the current instance with a reversed set of tokens. If it is set, the label_index is adjusted accordingly to point to the correct token.



# File 'lib/ariel/token_stream.rb', line 153

def reverse
  self.deep_clone.reverse!
end

#reverse! ⇒ Object

Same as TokenStream#reverse, but changes are made in place.



# File 'lib/ariel/token_stream.rb', line 164

def reverse!
  @tokens.reverse!
  if label_index
    @label_index = reverse_pos(@label_index)
  end
  @cur_pos = reverse_pos(@cur_pos)
  @reversed=!@reversed
  return self
end

#reverse_pos(pos) ⇒ Object

Converts the given position so that it points to the same token once the stream is reversed. The result is invalid when @tokens.size == 0.



# File 'lib/ariel/token_stream.rb', line 159

def reverse_pos(pos)
  @tokens.size-(pos + 1)
end
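The index mapping is simple mirror arithmetic: for a stream of n tokens, the token at position pos ends up at n - (pos + 1) after reversal. A quick plain-Ruby check (the sizes and positions below are arbitrary examples):

```ruby
# Mirror a position across a stream of the given size: the first
# token becomes the last, the last becomes the first.
def reverse_pos(pos, size)
  size - (pos + 1)
end

reverse_pos(0, 5)  # => 4  (first token becomes last)
reverse_pos(4, 5)  # => 0  (last token becomes first)
reverse_pos(2, 5)  # => 2  (the middle token is a fixed point)
```

Applying the mapping twice returns the original position, which is why reverse! can use the same function for both directions.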

#reversed? ⇒ Boolean

Returns true or false depending on whether the given TokenStream is in a reversed state.

Returns:

  • (Boolean)


# File 'lib/ariel/token_stream.rb', line 176

def reversed?
  @reversed
end

#rewind ⇒ Object

Return to the beginning of the TokenStream.



# File 'lib/ariel/token_stream.rb', line 145

def rewind
  @cur_pos=0
  self
end

#set_label_at(pos) ⇒ Object

Set a label at a given offset in the original text. Searches for a token with a start_loc equal to the position passed as an argument, and raises an error if one is not found.



# File 'lib/ariel/token_stream.rb', line 96

def set_label_at(pos)
  token_pos=nil
  @tokens.each_index {|i| token_pos = i if @tokens[i].start_loc == pos}
  if token_pos.nil?
    raise ArgumentError, "Given string position does not match the start of any token"
  else
    @label_index = token_pos
    debug "Token ##{label_index} - \"#{@tokens[label_index].text}\" labeled."
    return @label_index
  end
end

#skip_to(*features) ⇒ Object

Takes a list of Strings and Symbols as its arguments, representing text to be matched in individual tokens and Wildcards. For a match to succeed, all wildcards and strings must match a consecutive sequence of Tokens in the TokenStream. All matched Tokens are consumed, and the TokenStream’s current position is returned on success. On failure, the TokenStream is restored to its original state and nil is returned.



# File 'lib/ariel/token_stream.rb', line 186

def skip_to(*features)
  original_pos=@cur_pos
  self.each_cons(features.size) do |tokens|
    i=0
    return @cur_pos if tokens.all? {|token| i+=1; token.matches?(features[i-1])}
  end
  @cur_pos=original_pos #No match, return TokenStream to original state
  return nil 
end
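The match-or-backtrack strategy can be sketched over a plain array of strings (a stand-in for Tokens; matching here is simple equality rather than Ariel’s wildcard-aware Token#matches?, and the class name is illustrative):

```ruby
# Sketch of skip_to's strategy: slide a window of the pattern's size
# along the remaining items; on success consume through the match,
# on failure restore the original position.
class MiniScanner
  attr_reader :cur_pos

  def initialize(items)
    @items = items
    @cur_pos = 0
  end

  def skip_to(*pattern)
    original_pos = @cur_pos
    (@cur_pos..@items.size - pattern.size).each do |start|
      if @items[start, pattern.size] == pattern
        @cur_pos = start + pattern.size  # consume through the match
        return @cur_pos
      end
    end
    @cur_pos = original_pos  # no match: backtrack
    nil
  end
end

s = MiniScanner.new(%w[a b c d e])
s.skip_to("b", "c")  # => 3; stream now positioned after "c"
s.skip_to("x")       # => nil; position restored to 3
```

The key property is the second call: a failed search leaves the cursor exactly where it was, so a rule made of several skip_to steps can fail cleanly without corrupting the stream state.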

#slice_by_string_pos(left, right) ⇒ Object

Returns the slice of the current instance containing all the tokens between the token where the start_loc == the left parameter and the token where the end_loc == the right parameter.



# File 'lib/ariel/token_stream.rb', line 68

def slice_by_string_pos(left, right)
  l_index=nil
  r_index=nil
  @tokens.each_index {|i| l_index = i if @tokens[i].start_loc == left}
  @tokens.each_index {|i| r_index = i if @tokens[i].end_loc == right}
  if l_index.nil? or r_index.nil?
    raise ArgumentError, "Cannot slice between those locations"
  else
    return slice_by_token_index(l_index, r_index)
  end
end

#slice_by_token_index(l_index, r_index) ⇒ Object

Slices tokens between the l_index and the r_index inclusive.



# File 'lib/ariel/token_stream.rb', line 81

def slice_by_token_index(l_index, r_index)
  sliced = self.dup
  sliced.tokens=@tokens[l_index..r_index]
  return sliced
end

#text(l_index = 0, r_index = -1) ⇒ Object

Returns all the text represented by the instance’s stored tokens, stripping any label tags if the stream was declared as containing them when it was tokenized (this would only happen during the process of loading labeled examples). See also TokenStream#raw_text.



# File 'lib/ariel/token_stream.rb', line 112

def text(l_index=0, r_index=-1)
  out=raw_text(l_index, r_index)
  if @original_text_contains_labels
    LabelUtils.clean_string(out)
  else
    out
  end
end

#tokenize(input, contains_labels = false) ⇒ Object

The tokenizer operates on a string by splitting it at every point it finds a match to a regular expression. Each match is added as a token, and the strings between the matches are stored along with their original offsets. The same is then done with the next regular expression on each of these split strings, and new tokens are created with the correct offsets in the original text. Any characters left unmatched by all of the regular expressions in @token_regexen are discarded. This approach allows a hierarchy of regular expressions to work simply and easily: a simple regular expression to match html tags might operate first, and later expressions that pick up runs of word characters can operate on what’s left. If contains_labels is set to true when calling tokenize, the tokenizer will first remove and discard any occurrences of label_tags (as defined by the regular expressions set in LabelUtils) before matching and adding tokens. Any label_tag tokens are marked as such upon creation.



# File 'lib/ariel/token_stream.rb', line 48

def tokenize(input, contains_labels=false)
  string_array=[[input, 0]]
  @original_text = input
  @original_text_contains_labels=contains_labels
  @label_tag_regexen.each {|regex| split_string_array_by_regex(string_array, regex, false)} if contains_labels
  @token_regexen.each {|regex| split_string_array_by_regex(string_array, regex)}
  @tokens.sort!
  @tokens.size
end