Class AbstractInputFormat.AbstractRecordReader<K,V>

java.lang.Object
  org.apache.hadoop.mapreduce.RecordReader<K,V>
    org.apache.accumulo.core.client.mapreduce.AbstractInputFormat.AbstractRecordReader<K,V>
All Implemented Interfaces:
Closeable, AutoCloseable
Direct Known Subclasses:
InputFormatBase.RecordReaderBase
Enclosing class:
AbstractInputFormat<K,V>

protected abstract static class AbstractInputFormat.AbstractRecordReader<K,V> extends org.apache.hadoop.mapreduce.RecordReader<K,V>
An abstract base class for creating RecordReader instances that convert Accumulo Key/Value pairs to the user's K/V types. Subclasses must implement RecordReader.nextKeyValue() and use it to update currentK, currentV, and currentKey (described under Field Details below).
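The following is a minimal, illustrative sketch of that pattern, not taken from the Accumulo source: a hypothetical PassThroughInputFormat whose nested reader hands Accumulo's Key/Value pairs straight through while maintaining the protected fields documented below. The class name and the pass-through behavior are assumptions for the example, and it assumes createRecordReader is the only method a subclass of AbstractInputFormat must supply.

  import java.io.IOException;
  import java.util.Collections;
  import java.util.List;
  import java.util.Map;

  import org.apache.accumulo.core.client.IteratorSetting;
  import org.apache.accumulo.core.client.mapreduce.AbstractInputFormat;
  import org.apache.accumulo.core.data.Key;
  import org.apache.accumulo.core.data.Value;
  import org.apache.hadoop.mapreduce.InputSplit;
  import org.apache.hadoop.mapreduce.RecordReader;
  import org.apache.hadoop.mapreduce.TaskAttemptContext;

  // Hypothetical example class; shown only to illustrate how a subclass updates the fields.
  public class PassThroughInputFormat extends AbstractInputFormat<Key,Value> {

    @Override
    public RecordReader<Key,Value> createRecordReader(InputSplit split, TaskAttemptContext context) {
      return new AbstractRecordReader<Key,Value>() {

        @Override
        public boolean nextKeyValue() throws IOException, InterruptedException {
          if (scannerIterator.hasNext()) {
            ++numKeysRead;                         // running count of entries read from Accumulo
            Map.Entry<Key,Value> entry = scannerIterator.next();
            currentKey = entry.getKey();           // used internally for progress, never returned
            currentK = currentKey;                 // the Key handed back to the client
            currentV = entry.getValue();           // the Value handed back to the client
            return true;
          }
          return false;
        }

        @Override
        protected List<IteratorSetting> contextIterators(TaskAttemptContext context, String tableName) {
          return Collections.emptyList();          // no extra server-side iterators in this sketch
        }
      };
    }
  }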
  • Field Details

    • numKeysRead

      protected long numKeysRead
    • scannerIterator

      protected Iterator<Map.Entry<Key,Value>> scannerIterator
    • scannerBase

      protected ScannerBase scannerBase
    • split

      protected RangeInputSplit split
    • currentK

      protected K currentK
      The Key that should be returned to the client
    • currentV

      protected V currentV
The Value that should be returned to the client
    • currentKey

      protected Key currentKey
      The Key that is used to determine progress in the current InputSplit. It is not returned to the client and is only used internally
  • Constructor Details

    • AbstractRecordReader

      protected AbstractRecordReader()
  • Method Details

    • contextIterators

      protected abstract List<IteratorSetting> contextIterators(org.apache.hadoop.mapreduce.TaskAttemptContext context, String tableName)
Extracts iterator settings from the context to be used by the RecordReader. A sketch of a possible override appears at the end of this section.
      Parameters:
      context - the Hadoop context for the configured job
      tableName - the table name for which the scanner is configured
      Returns:
      List of iterator settings for given table
      Since:
      1.7.0
    • setupIterators

      @Deprecated protected void setupIterators(org.apache.hadoop.mapreduce.TaskAttemptContext context, Scanner scanner, String tableName, RangeInputSplit split)
      Configures the iterators on a scanner for the given table name.
Parameters:
context - the Hadoop context for the configured job
scanner - the scanner for which to configure the iterators
tableName - the table name for which the scanner is configured
split - the RangeInputSplit for the current task
      Since:
      1.6.0
    • initialize

      public void initialize(org.apache.hadoop.mapreduce.InputSplit inSplit, org.apache.hadoop.mapreduce.TaskAttemptContext attempt) throws IOException
      Specified by:
      initialize in class org.apache.hadoop.mapreduce.RecordReader<K,V>
      Throws:
      IOException
    • close

      public void close()
      Specified by:
      close in interface AutoCloseable
      Specified by:
      close in interface Closeable
      Specified by:
      close in class org.apache.hadoop.mapreduce.RecordReader<K,V>
    • getProgress

      public float getProgress() throws IOException
      Specified by:
      getProgress in class org.apache.hadoop.mapreduce.RecordReader<K,V>
      Throws:
      IOException
    • getCurrentKey

      public K getCurrentKey() throws IOException, InterruptedException
      Specified by:
      getCurrentKey in class org.apache.hadoop.mapreduce.RecordReader<K,V>
      Throws:
      IOException
      InterruptedException
    • getCurrentValue

      public V getCurrentValue() throws IOException, InterruptedException
      Specified by:
      getCurrentValue in class org.apache.hadoop.mapreduce.RecordReader<K,V>
      Throws:
      IOException
      InterruptedException
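The lifecycle methods above follow the standard Hadoop RecordReader contract. As a hedged sketch of how the MapReduce runtime (or a test harness) drives a reader built on this class, assuming a hypothetical ReaderDriver helper: initialize() prepares scanning state for the split, nextKeyValue() advances and updates currentK/currentV, getCurrentKey()/getCurrentValue() expose them, and close() releases scanner resources.

  import java.io.IOException;

  import org.apache.hadoop.mapreduce.InputSplit;
  import org.apache.hadoop.mapreduce.RecordReader;
  import org.apache.hadoop.mapreduce.TaskAttemptContext;

  // Hypothetical helper; "split" and "attempt" are assumed to come from the InputFormat
  // and the MapReduce runtime.
  final class ReaderDriver {
    static <K, V> long drive(RecordReader<K, V> reader, InputSplit split,
        TaskAttemptContext attempt) throws IOException, InterruptedException {
      long records = 0;
      reader.initialize(split, attempt);          // sets up scanning state for this split
      try {
        while (reader.nextKeyValue()) {           // subclass updates currentK/currentV here
          K key = reader.getCurrentKey();
          V value = reader.getCurrentValue();
          records++;
          // a real Mapper would receive (key, value) via map(key, value, context)
        }
      } finally {
        reader.close();                           // releases scanner resources
      }
      return records;
    }
  }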
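Referenced from contextIterators above: a minimal sketch of an override, assuming a hypothetical subclass that attaches Accumulo's WholeRowIterator to every table. The priority 50 and the name "wholeRows" are arbitrary example values.

  import java.util.ArrayList;
  import java.util.List;

  import org.apache.accumulo.core.client.IteratorSetting;
  import org.apache.accumulo.core.iterators.user.WholeRowIterator;
  import org.apache.hadoop.mapreduce.TaskAttemptContext;

  // ...inside a subclass of AbstractInputFormat.AbstractRecordReader<K,V>:
  @Override
  protected List<IteratorSetting> contextIterators(TaskAttemptContext context, String tableName) {
    List<IteratorSetting> iterators = new ArrayList<>();
    // WholeRowIterator groups a row's Key/Value pairs into a single entry server-side.
    iterators.add(new IteratorSetting(50, "wholeRows", WholeRowIterator.class));
    return iterators;
  }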