


QA Increasing Context Length Dataset

1. Overview

The QA Increasing Context Length dataset is designed to facilitate benchmarking and research on question‐answering (QA) systems as the size of the input context grows. It compiles QA examples drawn from multiple LongBench subsets, each bucketed by ascending context length (measured in tokens). Researchers can use this dataset to evaluate how modern language models and retrieval‐augmented systems handle progressively larger contexts (from 3K to 32K tokens) in terms of accuracy, latency, memory usage, and robustness.

  • Intended purpose

    • To measure QA performance (e.g., exact match, F1) under different context‐length regimes.
    • To assess inference latency, throughput, and resource utilization when models process long documents.
    • To compare retrieval strategies or memory‐efficient attention mechanisms as context size increases.
  • Key features

    1. A single CSV (longbench_all_buckets_100.csv) containing examples from five context‐length buckets: 3K, 4K, 8K, 16K, and 32K tokens.
    2. Each row includes a complete (potentially multi‐paragraph) passage, a target question, and its ground‐truth answer, along with metadata fields that facilitate grouping, filtering, or statistical analysis.
    3. Examples are drawn from diverse domains (scientific articles, technical reports, web pages, etc.), as indicated by the dataset field.
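
Assuming the single-CSV layout above, per-bucket statistics are straightforward to compute with pandas. The snippet below is a sketch that substitutes a tiny inline sample for the real longbench_all_buckets_100.csv:

```python
import io
import pandas as pd

# Stand-in for longbench_all_buckets_100.csv (same six columns).
sample_csv = io.StringIO(
    "context,question,answer,length,dataset,context_range\n"
    "some long passage,What is X?,an answer,2454,qasper,3k\n"
    "another passage,What is Y?,another answer,7100,qasper,8k\n"
)

# For the real data: df = pd.read_csv("longbench_all_buckets_100.csv")
df = pd.read_csv(sample_csv)

# Number of examples and mean token count per context-length bucket.
stats = df.groupby("context_range")["length"].agg(["count", "mean"])
print(stats)
```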

2. Dataset Structure

  • File format: Comma‐separated values (UTF-8 encoded)
  • Number of rows: Varies by bucket (typically 100 examples per bucket)
  • Context‐length buckets: 5 (“3k”, “4k”, “8k”, “16k”, “32k”)

2.1. Column Descriptions

Each row (example) has six columns:

  • context (string): A long text passage whose token count falls into one of the predefined buckets (3K–32K).
  • question (string): A natural‐language question referring to information contained in context.
  • answer (string): The ground‐truth answer (text span or summary) extracted from the context.
  • length (int): The exact token count of the context (as measured by a standard tokenizer, e.g., T5/BPE).
  • dataset (string): The original LongBench subset (e.g., “qasper”) from which the example was drawn.
  • context_range (string): One of "3k", "4k", "8k", "16k", or "32k"; indicates the bucket into which length falls.
  • Context buckets (context_range)

    • "3k": 1 500 – 2 999 tokens (approximate; exact boundaries may vary)

    • "4k": 3 000 – 3 999 tokens

    • "8k": 4 000 – 7 999 tokens

    • "16k": 8 000 – 15 999 tokens

    • "32k": 16 000 – 31 999 tokens

    Note: The buckets are chosen to stress‐test long‐context inference. The exact cutoff may be implementation‐dependent, but each row’s length field indicates the precise token count.
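
    The bucketing rule can be sketched as a small helper. The cutoffs below follow the approximate ranges listed above and are illustrative, not the exact script used to build the dataset:

```python
def context_bucket(num_tokens: int) -> str:
    """Map a token count to its context_range bucket (approximate boundaries)."""
    # Upper bounds taken from the ranges above; in this sketch the "3k"
    # bucket also absorbs anything shorter than 1,500 tokens.
    if num_tokens < 3_000:
        return "3k"
    if num_tokens < 4_000:
        return "4k"
    if num_tokens < 8_000:
        return "8k"
    if num_tokens < 16_000:
        return "16k"
    return "32k"

print(context_bucket(2_454))   # 3k
print(context_bucket(12_000))  # 16k
```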

3. Loading

If this collection has been published under a Hugging Face dataset ID (for example, slinusc/qa_increasing_context_length), you can load it directly:

from datasets import load_dataset

# Replace with the actual HF dataset ID if different
dataset = load_dataset("slinusc/qa_increasing_context_length")

# Print overall structure and splits
print(dataset)

# Inspect column names in the “train” split
print(dataset["train"].column_names)
# ['context', 'question', 'answer', 'length', 'dataset', 'context_range']
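
For the exact-match and F1 scoring mentioned in the overview, a minimal SQuAD-style sketch (simplified to lowercasing and whitespace tokenization) could be:

```python
from collections import Counter

def exact_match(prediction: str, reference: str) -> bool:
    """Case-insensitive exact match after whitespace normalization."""
    return prediction.lower().split() == reference.lower().split()

def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between a predicted and a ground-truth answer."""
    pred, ref = prediction.lower().split(), reference.lower().split()
    common = Counter(pred) & Counter(ref)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(exact_match("English", "english"))  # True
print(token_f1("the IMDb dataset", "IMDb dataset of movie reviews"))  # 0.5
```

Production evaluations typically add article and punctuation stripping; see the official SQuAD evaluation script for the full normalization.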

4. Citation & License

  • If you plan to publish results using this dataset, please refer to the original LongBench publication (LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding) and cite the specific subset(s) from which examples were drawn.
  • Check the Hugging Face hub (dataset card) for detailed licensing information. Typically, LongBench subsets carry permissive licenses for research use, but always verify at https://huggingface.co/datasets/… before redistribution.