Dataset Viewer (auto-converted to Parquet)
Columns (types and value statistics as reported by the viewer):
    python_version: string, 3 distinct values
    library: string, 26 distinct values
    version: string, length 1 to 6
    problem: string, length 34 to 1.02k
    starting_code: string, length 23 to 1.55k
    example_id: string, length 1 to 3
    test: string, length 66 to 5.96k
    solution: string, length 7 to 9.39k
    type_of_change: string, 21 distinct values
    name_of_class_or_func: string, length 0 to 63
    additional_dependencies: string, 31 distinct values
    docs: list, length 1 to 3
    functional: unknown type
    webdev: unknown type
    solution_api_call: bool, 1 distinct value
    api_calls: list, length 0 to 47
    release_date: date string, 2014-08-01 00:00:00 to 2024-01-01 00:00:00
    extra_dependencies: string, 3 distinct values
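Each record pairs a problem statement with starting code, a reference solution that completes it, and a test. A minimal harness can splice the solution into the starting code and execute the test; the sketch below is illustrative only, and the toy record in it is invented rather than drawn from the dataset:

```python
import textwrap

# A toy record in the same shape as the dataset rows (values are invented).
record = {
    "starting_code": "import math\ndef area(r: float) -> float:",
    "solution": "return math.pi * r * r",
    "test": "assert abs(area(2.0) - 4 * math.pi) < 1e-9",
}

# Indent the solution so it becomes the body of the started function,
# then run the assembled program together with its test.
program = (
    record["starting_code"]
    + "\n"
    + textwrap.indent(record["solution"], "    ")
    + "\n"
    + record["test"]
)
exec(program)  # raises AssertionError if the solution is wrong
```

The exact evaluation harness used with this dataset is not shown in the dump; this only demonstrates how the three code fields compose.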
Example 0 | torch 1.9.0 | Python 3.7 | released 2021-06
Problem: Calculate the logarithm of the cumulative distribution function of the standard normal distribution using available functions. If not available in PyTorch, use another library.
Starting code:
    import torch

    def log_ndtr(input_tensor: torch.Tensor) -> torch.Tensor:
Test:
    from scipy.stats import norm
    input_tensor = torch.linspace(-10, 10, steps=20)
    expected_result = torch.tensor([-5.3231e+01, -4.3150e+01, -3.4164e+01, -2.6270e+01, -1.9462e+01, -1.3734e+01, -9.0731e+00, -5.4610e+00, -2.8617e+00, -1.2062e+00, -3.5572e-01, -5.8874e-02, -4.2585e-03, -1.1471e-04, -1.0854e-06,...
Solution:
    import numpy as np
    from scipy.stats import norm
    output = torch.from_numpy(norm.logcdf(input_tensor.numpy()))
    return output
Type of change: other library | Name: log_ndtr | Additional dependencies: scipy==1.7.3 numpy==1.21.6
Docs: https://docs.scipy.org/doc/scipy-1.8.0/reference/stats.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: torch.from_numpy, input_tensor.numpy, scipy.stats.norm.logcdf

Example 1 | torch 1.9.0 | Python 3.7 | released 2021-06
Problem: Calculate the natural logarithm of the absolute value of the gamma function using PyTorch's special functions if available in this version, otherwise you may use another library.
Starting code:
    import torch

    def gamma_ln(input_tensor: torch.Tensor) -> torch.Tensor:
Test:
    input_tensor = torch.linspace(0, 10, steps=10)
    expected_result = torch.Tensor([float('inf'),-0.0545,0.1092,1.0218,2.3770,4.0476,5.9637,8.0806,10.3675,12.8018])
    assert torch.allclose(gamma_ln(input_tensor), expected_result, rtol=1e-3, atol=1e-3)
Solution:
    import numpy as np
    from scipy.special import gammaln as scipy_gammaln
    output = torch.from_numpy(scipy_gammaln(input_tensor.numpy()))
    return output
Type of change: other library | Name: gammaln | Additional dependencies: scipy==1.7.3 numpy==1.21.6
Docs: https://docs.scipy.org/doc/scipy-1.8.0/reference/special.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: torch.from_numpy, input_tensor.numpy, scipy.special.gammaln

Example 2 | torch 1.9.0 | Python 3.7 | released 2021-06
Problem: Calculate the error function using PyTorch's special functions if available in this version, otherwise you may use another library.
Starting code:
    import torch

    def erf(input_tensor: torch.Tensor) -> torch.Tensor:
Test:
    input_tensor = torch.linspace(0, 10, steps=10)
    expected_result = torch.Tensor([0.0000,0.8839,0.9983,1.0000,1.0000,1.0000,1.0000,1.0000,1.0000,1.0000])
    assert torch.allclose(erf(input_tensor), expected_result, rtol=1e-3, atol=1e-3)
Solution:
    import numpy as np
    from scipy.special import erf as scipy_erf
    output = torch.from_numpy(scipy_erf(input_tensor.numpy()))
    return output
Type of change: other library | Name: erf | Additional dependencies: scipy==1.7.3 numpy==1.21.6
Docs: https://docs.scipy.org/doc/scipy-1.8.0/reference/special.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: torch.from_numpy, input_tensor.numpy, scipy.special.erf

Example 3 | torch 1.9.0 | Python 3.7 | released 2021-06
Problem: Calculate the complementary error function using PyTorch's special functions if available in this version, otherwise you may use another library.
Starting code:
    import torch

    def erfc(input_tensor: torch.Tensor) -> torch.Tensor:
Test:
    input_tensor = torch.linspace(0, 10, steps=10)
    expected_result = torch.Tensor([1.0000e+00,1.1610e-01,1.6740e-03,2.4285e-06,3.2702e-10,3.9425e-15,4.1762e-21,3.8452e-28,3.0566e-36,1.4013e-45])
    assert torch.allclose(erfc(input_tensor), expected_result, rtol=1e-3, atol=1e-3)
Solution:
    import numpy as np
    from scipy.special import erfc as scipy_erfc
    output = torch.from_numpy(scipy_erfc(input_tensor.numpy()))
    return output
Type of change: other library | Name: erfc | Additional dependencies: scipy==1.7.3 numpy==1.21.6
Docs: https://docs.scipy.org/doc/scipy-1.8.0/reference/special.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: torch.from_numpy, input_tensor.numpy, scipy.special.erfc

Example 4 | torch 1.9.0 | Python 3.7 | released 2021-06
Problem: Calculate the modified Bessel function of the first kind, order 0 using PyTorch's special functions if available in this version, otherwise you may use another library.
Starting code:
    import torch

    def bessel_i0(input_tensor: torch.Tensor) -> torch.Tensor:
Test:
    input_tensor = torch.linspace(0, 10, steps=10)
    expected_result = torch.Tensor([1.0000e+00,1.3333e+00,2.6721e+00,6.4180e+00,1.6648e+01,4.4894e+01,1.2392e+02,3.4740e+02,9.8488e+02,2.8157e+03])
    assert torch.allclose(bessel_i0(input_tensor), expected_result, rtol=1e-3, atol=1e-3)
Solution:
    import numpy as np
    from scipy.special import i0 as scipy_i0
    output = torch.from_numpy(scipy_i0(input_tensor.numpy()))
    return output
Type of change: other library | Name: bessel_i0 | Additional dependencies: scipy==1.7.3 numpy==1.21.6
Docs: https://docs.scipy.org/doc/scipy-1.8.0/reference/special.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: scipy.special.i0, torch.from_numpy, input_tensor.numpy

Example 5 | torch 1.9.0 | Python 3.7 | released 2021-06
Problem: Calculate the modified Bessel function of the first kind, order 1 using PyTorch's special functions if available in this version, otherwise you may use another library.
Starting code:
    import torch

    def bessel_i1(input_tensor: torch.Tensor) -> torch.Tensor:
Test:
    input_tensor = torch.linspace(0, 10, steps=10)
    expected_result = torch.Tensor([0.0000e+00,6.4581e-01,1.9536e+00,5.3391e+00,1.4628e+01,4.0623e+01,1.1420e+02,3.2423e+02,9.2770e+02,2.6710e+03])
    assert torch.allclose(bessel_i1(input_tensor), expected_result, rtol=1e-3, atol=1e-3)
Solution:
    import numpy as np
    from scipy.special import i1 as scipy_i1
    output = torch.from_numpy(scipy_i1(input_tensor.numpy()))
    return output
Type of change: other library | Name: bessel_i1 | Additional dependencies: scipy==1.7.3 numpy==1.21.6
Docs: https://docs.scipy.org/doc/scipy-1.8.0/reference/special.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: torch.from_numpy, input_tensor.numpy, scipy.special.i1
Example 6 | torch 1.10.0 | Python 3.7 | released 2021-10
Problem: Calculate the natural logarithm of the absolute value of the gamma function using pytorch's special functions if available in this version, otherwise you may use another library.
Starting code:
    import torch

    def gamma_ln(input_tensor: torch.Tensor) -> torch.Tensor:
Test:
    input_tensor = torch.linspace(0, 10, steps=10)
    expected_result = torch.Tensor([torch.inf,-0.0545,0.1092,1.0218,2.3770,4.0476,5.9637,8.0806,10.3675,12.8018])
    assert torch.allclose(gamma_ln(input_tensor), expected_result, rtol=1e-3, atol=1e-3)
Solution:
    return torch.special.gammaln(input_tensor)
Type of change: new func/method/class | Name: gammaln
Docs: https://pytorch.org/docs/1.10/special.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: torch.special.gammaln

Example 7 | torch 1.10.0 | Python 3.7 | released 2021-10
Problem: Calculate the error function using pytorch's special functions if available in this version, otherwise you may use another library.
Starting code:
    import torch

    def erf(input_tensor: torch.Tensor) -> torch.Tensor:
Test:
    input_tensor = torch.linspace(0, 10, steps=10)
    expected_result = torch.Tensor([0.0000,0.8839,0.9983,1.0000,1.0000,1.0000,1.0000,1.0000,1.0000,1.0000])
    assert torch.allclose(erf(input_tensor), expected_result, rtol=1e-3, atol=1e-3)
Solution:
    return torch.special.erf(input_tensor)
Type of change: new func/method/class | Name: erf
Docs: https://pytorch.org/docs/1.10/special.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: torch.special.erf

Example 8 | torch 1.10.0 | Python 3.7 | released 2021-10
Problem: Calculate the complementary error function using pytorch's special functions if available in this version, otherwise you may use another library.
Starting code:
    import torch

    def erfc(input_tensor: torch.Tensor) -> torch.Tensor:
Test:
    input_tensor = torch.linspace(0, 10, steps=10)
    expected_result = torch.Tensor([1.0000e+00,1.1610e-01,1.6740e-03,2.4285e-06,3.2702e-10,3.9425e-15,4.1762e-21,3.8452e-28,3.0566e-36,1.4013e-45])
    assert torch.allclose(erfc(input_tensor), expected_result, rtol=1e-3, atol=1e-3)
Solution:
    return torch.special.erfc(input_tensor)
Type of change: new func/method/class | Name: erfc
Docs: https://pytorch.org/docs/1.10/special.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: torch.special.erfc

Example 9 | torch 1.10.0 | Python 3.7 | released 2021-10
Problem: Calculate the modified Bessel function of the first kind, order 0 using pytorch's special functions if available in this version, otherwise you may use another library.
Starting code:
    import torch

    def bessel_i0(input_tensor: torch.Tensor) -> torch.Tensor:
Test:
    input_tensor = torch.linspace(0, 10, steps=10)
    expected_result = torch.Tensor([1.0000e+00,1.3333e+00,2.6721e+00,6.4180e+00,1.6648e+01,4.4894e+01,1.2392e+02,3.4740e+02,9.8488e+02,2.8157e+03])
    assert torch.allclose(bessel_i0(input_tensor), expected_result, rtol=1e-3, atol=1e-3)
Solution:
    return torch.special.i0(input_tensor)
Type of change: new func/method/class | Name: bessel_i0
Docs: https://pytorch.org/docs/1.10/special.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: torch.special.i0

Example 10 | torch 1.10.0 | Python 3.7 | released 2021-10
Problem: Calculate the modified Bessel function of the first kind, order 1 using pytorch's special functions if available in this version, otherwise you may use another library.
Starting code:
    import torch

    def bessel_i1(input_tensor: torch.Tensor) -> torch.Tensor:
Test:
    input_tensor = torch.linspace(0, 10, steps=10)
    expected_result = torch.Tensor([0.0000e+00,6.4581e-01,1.9536e+00,5.3391e+00,1.4628e+01,4.0623e+01,1.1420e+02,3.2423e+02,9.2770e+02,2.6710e+03])
    assert torch.allclose(bessel_i1(input_tensor), expected_result, rtol=1e-3, atol=1e-3)
Solution:
    return torch.special.i1(input_tensor)
Type of change: new func/method/class | Name: bessel_i1
Docs: https://pytorch.org/docs/1.10/special.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: torch.special.i1
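The torch examples above pose the same special-function tasks twice: under torch 1.9.0 the reference solutions fall back to SciPy, while under 1.10.0 they call the newly added torch.special functions. The underlying pattern, use the library's own function when the installed version has it and fall back otherwise, can be sketched with the standard library alone (math.erfc exists in modern Python, so the fallback branch here is purely illustrative):

```python
import math

def erfc_compat(x: float) -> float:
    # Prefer the library's own implementation when this version exposes it...
    if hasattr(math, "erfc"):
        return math.erfc(x)
    # ...otherwise fall back to an equivalent formulation.
    return 1.0 - math.erf(x)

print(erfc_compat(0.0))  # 1.0
```

The same hasattr check works against torch.special when deciding between the 1.9-era SciPy route and the 1.10 native route.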
Example 11 | torch 1.10.0 | Python 3.7 | released 2021-10
Problem: You are given two tensors, `tensor1` and `tensor2`, both of shape `(n,)`. Your task is to create a boolean mask indicating whether each element of `tensor1` is less than the corresponding element of `tensor2`, and then invert this mask. Store the answer in a variable named mask.
Starting code:
    import torch

    def invert_mask(tensor1: torch.Tensor, tensor2: torch.Tensor) -> torch.BoolTensor:
Test:
    tensor1 = torch.Tensor([1, 2, 3])
    tensor2 = torch.Tensor([3, 1, 2])
    expected_mask = torch.Tensor([False, True, True])
    assert torch.all(torch.eq(invert_mask(tensor1, tensor2), expected_mask))
Solution:
    return ~(tensor1 < tensor2)
Type of change: output behaviour | Name: invert_mask_v1_1
Docs: https://pytorch.org/docs/1.10/tensors.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: (none)
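The inverted less-than mask in the example above has a direct pure-Python analogue (a sketch with plain lists, not the dataset's torch solution): inverting `a < b` elementwise is the same as computing `a >= b`:

```python
tensor1 = [1, 2, 3]
tensor2 = [3, 1, 2]

# Build the elementwise (a < b) mask, then invert it; the inverted
# mask is simply the elementwise (a >= b) comparison.
mask = [not (a < b) for a, b in zip(tensor1, tensor2)]
print(mask)  # [False, True, True]
```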
Example 12 | torch 1.12.0 | Python 3.10 | released 2022-06
Problem: Calculate the logarithm of the cumulative distribution function of the standard normal distribution using PyTorch's special functions.
Starting code:
    import torch

    def log_ndtr(input_tensor: torch.Tensor) -> torch.Tensor:
Test:
    input_tensor = torch.linspace(-10, 10, steps=20)
    expected_result = torch.tensor([-5.3231e+01, -4.3150e+01, -3.4164e+01, -2.6270e+01, -1.9462e+01, -1.3734e+01, -9.0731e+00, -5.4610e+00, -2.8617e+00, -1.2062e+00, -3.5572e-01, -5.8874e-02, -4.2585e-03, -1.1471e-04, -1.0854e-06, -3.5303e-09, -3.9019...
Solution:
    return torch.special.log_ndtr(input_tensor)
Type of change: new func/method/class | Name: log_ndtr
Docs: https://pytorch.org/docs/1.12/distributions.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: torch.special.log_ndtr
Example 13 | torch 1.13 | Python 3.10 | released 2022-10
Problem: You are given two tensors, `tensor1` and `tensor2`, both of shape `(n,)`. Your task is to create a boolean mask indicating whether each element of `tensor1` is less than the corresponding element of `tensor2`, and then invert this mask. Store the answer in a variable named mask.
Starting code:
    import torch

    def invert_mask(tensor1: torch.Tensor, tensor2: torch.Tensor) -> torch.BoolTensor:
Test:
    tensor1 = torch.Tensor([1, 2, 3])
    tensor2 = torch.Tensor([3, 1, 2])
    expected_mask = torch.Tensor([False, True, True])
    assert torch.all(torch.eq(invert_mask(tensor1, tensor2), expected_mask))
Solution:
    return ~(tensor1 < tensor2).bool()
Type of change: output behaviour | Name: invert_mask_v1_2
Docs: https://pytorch.org/docs/1.13/tensors.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: bool

Example 14 | torch 1.13 | Python 3.10 | released 2022-10
Problem: You are given an audio signal represented as a 1D tensor `audio_signal`. Your task is to compute the Short-Time Fourier Transform (STFT) of the signal. Do not return a complex data type.
Starting code:
    import torch

    def stft(audio_signal: torch.Tensor, n_fft: int) -> torch.Tensor:
Test:
    audio_signal = torch.rand(1024)
    n_fft = 128
    expected_shape = (65, 33, 2)
    assert stft(audio_signal, n_fft).shape == expected_shape
Solution:
    return torch.stft(audio_signal, n_fft=n_fft, return_complex=False)
Type of change: argument change | Name: torch.stft
Docs: https://pytorch.org/docs/1.13/torch.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: torch.stft

Example 15 | torch 2 | Python 3.10 | released 2023-05
Problem: You are given an audio signal represented as a 1D tensor `audio_signal`. Your task is to compute the Short-Time Fourier Transform (STFT) of the signal. Do not return a complex data type.
Starting code:
    import torch

    def stft(audio_signal: torch.Tensor, n_fft: int) -> torch.Tensor:
Test:
    audio_signal = torch.rand(1024)
    n_fft = 128
    expected_shape = (65, 33, 2)
    assert stft(audio_signal, n_fft).shape == expected_shape
Solution:
    return torch.view_as_real(torch.stft(audio_signal, n_fft=n_fft, return_complex=True))
Type of change: argument change | Name: torch.stft
Docs: https://pytorch.org/docs/2.0/torch.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: torch.view_as_real, torch.stft
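The two stft examples above differ only in how a non-complex result is obtained: torch 1.13 could still return a real tensor directly via `return_complex=False`, while torch 2 requires `return_complex=True` followed by `torch.view_as_real`. The representation change itself can be sketched with the standard library (plain Python complex numbers standing in for tensor bins, purely illustrative):

```python
# A tiny "spectrogram" of complex frequency bins.
spec = [1 + 2j, 3 - 4j, 0.5 + 0j]

# What view_as_real does conceptually: split each complex value into a
# (real, imag) pair, adding one trailing dimension of size 2.
as_real = [(z.real, z.imag) for z in spec]

# And the inverse (view_as_complex): rebuild complex values from pairs.
back = [complex(re, im) for re, im in as_real]

assert back == spec
```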
Example 16 | torch 1.13 | Python 3.10 | released 2022-10
Problem: You are given a spectrogram represented as a 3D tensor `spectrogram` with dimensions `(65, 33, 2)`, where the first dimension represents the frequency bins, the second dimension represents the time frames, and the third dimension represents the real and imaginary parts of the complex values. Your task is to compute the...
Starting code:
    import torch

    def istft(spectrogram: torch.Tensor, signal: torch.Tensor, n_fft: int, hop_length: int, win_length: int, normalized=False) -> torch.Tensor:
Test:
    # Sample rate (samples per second)
    fs = 8000
    # Duration of the signal in seconds
    t = 1
    # Time axis for the signal
    time = torch.linspace(0, t, steps=int(fs * t))
    # Frequency of the sine wave in Hz
    frequency = 440
    # Generate a sine wave
    signal = torch.sin(2 * torch.pi * frequency * time)
    n_fft = 1024
    # Number of ...
Solution:
    return torch.istft(spectrogram, n_fft=n_fft, hop_length=hop_length, win_length=win_length, window=torch.hann_window(win_length), length=signal.shape[0], normalized=False)
Type of change: argument change | Name: torch.istft
Docs: https://pytorch.org/docs/1.13/torch.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: torch.hann_window, torch.istft

Example 17 | torch 2 | Python 3.10 | released 2023-05
Problem: You are given a spectrogram represented as a 3D tensor `spectrogram` with dimensions `(65, 33, 2)`, where the first dimension represents the frequency bins, the second dimension represents the time frames, and the third dimension represents the real and imaginary parts of the complex values. Your task is to compute the...
Starting code:
    import torch

    def istft(spectrogram: torch.Tensor, signal: torch.Tensor, n_fft: int, hop_length: int, win_length: int, normalized=False) -> torch.Tensor:
Test:
    # Sample rate (samples per second)
    fs = 8000
    # Duration of the signal in seconds
    t = 1
    # Time axis for the signal
    time = torch.linspace(0, t, steps=int(fs * t))
    # Frequency of the sine wave in Hz
    frequency = 440
    # Generate a sine wave
    signal = torch.sin(2 * torch.pi * frequency * time)
    n_fft = 1024
    # Number of ...
Solution:
    return torch.istft(torch.view_as_complex(spectrogram), n_fft=n_fft, hop_length=hop_length, win_length=win_length, window=torch.hann_window(win_length), length=signal.shape[0], normalized=False)
Type of change: argument change | Name: torch.istft
Docs: https://pytorch.org/docs/2.0/torch.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: torch.hann_window, torch.istft, torch.view_as_complex
Example 18 | geopandas 0.10.0 | Python 3.10 | released 2021-10
Problem: Write a function that performs a spatial join.
Starting code:
    import geopandas as gpd
    from shapely.geometry import Point, Polygon

    def spatial_join(gdf1: gpd.GeoDataFrame, gdf2: gpd.GeoDataFrame) -> gpd.GeoDataFrame:
        return
Test:
    gdf1 = gpd.GeoDataFrame({'geometry': [Point(1, 1), Point(2, 2), Point(3, 3)]})
    polygons = [Polygon([(0, 0), (0, 4), (4, 4), (4, 0)]), Polygon([(4, 4), (4, 8), (8, 8), (8, 4)])]
    gdf2 = gpd.GeoDataFrame({'geometry': polygons})
    expected = gpd.GeoDataFrame({ 'geometry': [Point(1, 1), Point(2, 2), Point(3, 3)], 'in...
Solution:
    gpd.sjoin(gdf1, gdf2, predicate='within')
Type of change: name change | Name: sjoin | Additional dependencies: rtree==0.9.3
Docs: https://geopandas.org/en/v0.10.0/docs/reference/api/geopandas.GeoDataFrame.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: geopandas.sjoin

Example 19 | geopandas 0.9.0 | Python 3.10 | released 2021-02
Problem: Write a function that performs a spatial join.
Starting code:
    import geopandas as gpd
    from shapely.geometry import Point, Polygon

    def spatial_join(gdf1: gpd.GeoDataFrame, gdf2: gpd.GeoDataFrame) -> gpd.GeoDataFrame:
        return
Test:
    gdf1 = gpd.GeoDataFrame({'geometry': [Point(1, 1), Point(2, 2), Point(3, 3)]})
    polygons = [Polygon([(0, 0), (0, 4), (4, 4), (4, 0)]), Polygon([(4, 4), (4, 8), (8, 8), (8, 4)])]
    gdf2 = gpd.GeoDataFrame({'geometry': polygons})
    expected_result = gpd.GeoDataFrame({ 'geometry': [Point(1, 1), Point(2, 2), Point(3, 3)], ...
Solution:
    gpd.sjoin(gdf1, gdf2, op='within')
Type of change: name change | Name: sjoin | Additional dependencies: rtree==0.9.3 shapely==1.8.5
Docs: https://geopandas.org/en/v0.9.0/docs/reference/api/geopandas.GeoDataFrame.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: geopandas.sjoin
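The two sjoin examples above differ only in a keyword rename: geopandas 0.9 spells the spatial predicate `op`, 0.10 spells it `predicate`. A version-agnostic caller can inspect the installed signature and pass whichever keyword exists; the sketch below uses an invented stand-in function in place of geopandas, since only the dispatch pattern is being shown:

```python
import inspect

def sjoin_compat(sjoin_fn, left, right, how="inner", pred="within"):
    # Pass the spatial predicate under whichever keyword this version
    # of sjoin actually accepts ('predicate' vs the older 'op').
    params = inspect.signature(sjoin_fn).parameters
    key = "predicate" if "predicate" in params else "op"
    return sjoin_fn(left, right, how=how, **{key: pred})

# Stand-in mimicking the old (0.9-style) keyword, for demonstration only.
def old_sjoin(left, right, how="inner", op="intersects"):
    return f"joined with op={op}"

print(sjoin_compat(old_sjoin, None, None))  # joined with op=within
```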
Example 20 | geopandas 0.10.0 | Python 3.10 | released 2021-10
Problem: Write a function that performs a union.
Starting code:
    import geopandas as gpd
    from shapely.geometry import box

    def perform_union(gdf: gpd.GeoDataFrame) -> gpd.GeoSeries:
        return
Test:
    gdf = gpd.GeoDataFrame({'geometry': [box(0, 0, 2, 5), box(0, 0, 2, 1)]})
    from shapely.geometry import Polygon
    coords = [(2, 0), (0, 0), (0, 1), (0, 5), (2, 5), (2, 1), (2, 0)]
    expected_result = Polygon(coords)
    assert perform_union(gdf).equals(expected_result)
Solution:
    gdf.geometry.unary_union
Type of change: name change | Name: cascaded_union
Docs: https://geopandas.org/en/v0.10.0/docs/reference/api/geopandas.GeoDataFrame.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: (none)

Example 21 | geopandas 0.9.0 | Python 3.10 | released 2021-02
Problem: Write a function that performs a union.
Starting code:
    import geopandas as gpd
    from shapely.geometry import box

    def perform_union(gdf: gpd.GeoDataFrame) -> gpd.GeoSeries:
        return
Test:
    gdf = gpd.GeoDataFrame({'geometry': [box(0, 0, 2, 5), box(0, 0, 2, 1)]})
    from shapely.geometry import Polygon
    coords = [(2, 0), (0, 0), (0, 1), (0, 5), (2, 5), (2, 1), (2, 0)]
    expected_result = Polygon(coords)
    assert perform_union(gdf) == expected_result
Solution:
    gdf.geometry.cascaded_union
Type of change: name change | Name: cascaded_union | Additional dependencies: shapely==1.8.5
Docs: https://geopandas.org/en/v0.9.0/docs/reference/api/geopandas.GeoDataFrame.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: (none)
Example 22 | geopandas 0.10.0 | Python 3.10 | released 2021-10
Problem: Write a function that creates a GeoSeries from x and y coordinates.
Starting code:
    import geopandas as gpd

    def create_geoseries(x: list[int], y: list[int]) -> gpd.GeoSeries:
        return
Test:
    from shapely.geometry import Point
    x, y = [1, 2], [3, 4]
    expected_result = gpd.GeoSeries([Point(1, 3), Point(2, 4)])
    assert list(create_geoseries(x, y)) == list(expected_result)
Solution:
    gpd.GeoSeries.from_xy(x, y)
Type of change: name change | Name: points_from_xy
Docs: https://geopandas.org/en/v0.10.0/docs/reference/api/geopandas.GeoSeries.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: geopandas.GeoSeries.from_xy

Example 23 | geopandas 0.9.0 | Python 3.10 | released 2021-02
Problem: Write a function that creates a GeoSeries from x and y coordinates.
Starting code:
    import geopandas as gpd

    def create_geoseries(x: list[int], y: list[int]) -> gpd.GeoSeries:
        return
Test:
    from shapely.geometry import Point
    x, y = [1, 2], [3, 4]
    expected_result = gpd.GeoSeries([Point(1, 3), Point(2, 4)])
    assert list(create_geoseries(x, y)) == list(expected_result)
Solution:
    gpd.points_from_xy(x, y)
Type of change: name change | Name: points_from_xy | Additional dependencies: shapely==1.8.5
Docs: https://geopandas.org/en/v0.9.0/docs/reference/api/geopandas.GeoSeries.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: geopandas.points_from_xy

Example 24 | geopandas 0.13.0 | Python 3.10 | released 2023-05
Problem: Write a function that performs a spatial query.
Starting code:
    import geopandas as gpd
    from shapely.geometry import Point, Polygon, box

    def spatial_query(gdf: gpd.GeoDataFrame, other: gpd.GeoDataFrame) -> gpd.GeoDataFrame:
        combined_geometry = other.unary_union
        return
Test:
    gdf = gpd.GeoDataFrame({'geometry': [Point(1, 2)]})
    other = gpd.GeoDataFrame({'geometry': [Point(1,1)]})
    result = spatial_query(gdf, other)
    expected_result = []
    assert (result == expected_result).all()
Solution:
    gdf.sindex.query(combined_geometry)
Type of change: name change | Name: query_bulk | Additional dependencies: rtree==1.0.1
Docs: https://geopandas.org/en/v0.13.0/docs/reference/api/geopandas.GeoDataFrame.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: gdf.sindex.query

Example 25 | geopandas 0.10.0 | Python 3.10 | released 2021-10
Problem: Write a function that performs a spatial query.
Starting code:
    import geopandas as gpd
    from shapely.geometry import Point, Polygon

    def spatial_query(gdf: gpd.GeoDataFrame, other: gpd.GeoSeries) -> gpd.GeoDataFrame:
        return
Test:
    gdf = gpd.GeoDataFrame({'geometry': [Point(1, 1), Point(2, 2), Point(3, 3)]})
    other = gpd.GeoSeries([Polygon([(0, 0), (0, 4), (4, 4), (4, 0)])])
    result = spatial_query(gdf, other)
    import numpy as np
    expected_result = np.array([
        [0, 0, 0],
        [0, 1, 2]
    ])
    assert (result == expected_result).all()
Solution:
    gdf.sindex.query_bulk(other)
Type of change: name change | Name: query_bulk | Additional dependencies: rtree==0.9.3
Docs: https://geopandas.org/en/v0.10.0/docs/reference/api/geopandas.GeoDataFrame.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: gdf.sindex.query_bulk
Example 26 | nltk 3.6.3 | Python 3.10 | released 2021-09
Problem: Write a function that displays usage information of an object.
Starting code:
    import nltk
    import io
    import contextlib

    def show_usage(obj: object) -> str:
        with io.StringIO() as buf, contextlib.redirect_stdout(buf):
Test:
    assert "LazyModule supports the following operations" in show_usage(nltk.corpus)
Solution:
    nltk.usage(obj)
    return buf.getvalue()
Type of change: deprecation | Name: usage
Docs: https://www.nltk.org/api/nltk.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: buf.getvalue, nltk.usage
Example 27 | networkx 2.8 | Python 3.10 | released 2022-04
Problem: Write a function that uses NetworkX's greedy_modularity_communities with the number of communities set at 5.
Starting code:
    import networkx as nx

    def modularity_communities(G: nx.Graph) -> list:
        return nx.community.greedy_modularity_communities(G,
Test:
    G = nx.karate_club_graph()
    result = [
        frozenset({8, 14, 15, 18, 20, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33}),
        frozenset({1, 2, 3, 7, 9, 12, 13, 17, 21}),
        frozenset({0, 16, 4, 5, 6, 10, 11}),
        frozenset({19}),
        frozenset({22})
    ]
    assertion_value = len(modularity_communities(G)) > 0 and len(modularit...
Solution:
    cutoff=5)
Type of change: argument change
Docs: https://networkx.org/documentation/networkx-2.8/reference/algorithms/generated/networkx.algorithms.community.modularity_max.greedy_modularity_communities.html#networkx.algorithms.community.modularity_max.greedy_modularity_communities
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: (none)

Example 28 | networkx 2.7 | Python 3.10 | released 2022-02
Problem: Write a function that uses NetworkX's greedy_modularity_communities with the number of communities set at 5.
Starting code:
    import networkx as nx

    def modularity_communities(G: nx.Graph) -> list:
        return nx.community.greedy_modularity_communities(G,
Test:
    G = nx.karate_club_graph()
    result = [
        frozenset({8, 14, 15, 18, 20, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33}),
        frozenset({1, 2, 3, 7, 9, 12, 13, 17, 21}),
        frozenset({0, 16, 4, 5, 6, 10, 11}),
        frozenset({19}),
        frozenset({22})
    ]
    assertion_value = len(modularity_communities(G)) > 0 and len(modularit...
Solution:
    n_communities=5)
Type of change: argument change | Additional dependencies: numpy==1.21.2
Docs: https://networkx.org/documentation/networkx-2.7/reference/algorithms/generated/networkx.algorithms.community.modularity_max.greedy_modularity_communities.html#networkx.algorithms.community.modularity_max.greedy_modularity_communities
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: (none)

Example 29 | networkx 2.8 | Python 3.10 | released 2022-04
Problem: Write a function that calculates the diameters' extreme distance of a graph.
Starting code:
    import networkx as nx

    def bounding_distance(G: nx.Graph) -> int:
        return nx.diameter
Test:
    G = nx.path_graph(5)
    result = 4
    assertion_value = bounding_distance(G) is not None and result == bounding_distance(G)
    assert assertion_value
Solution:
    (G, usebounds=True)
Type of change: name change
Docs: https://networkx.org/documentation/networkx-2.8/reference/algorithms/generated/networkx.algorithms.distance_measures.diameter.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: (none)

Example 30 | networkx 2.6 | Python 3.10 | released 2021-07
Problem: Write a function that calculates the diameters' extreme distance of a graph.
Starting code:
    import networkx as nx

    def bounding_distance(G: nx.Graph) -> int:
        return nx.algorithms.distance_measures.
Test:
    G = nx.path_graph(5)
    result = 4
    assert bounding_distance(G) is not None and result == bounding_distance(G)
Solution:
    extrema_bounding(G, "diameter")
Type of change: name change
Docs: https://networkx.org/documentation/networkx-2.6/reference/algorithms/distance_measures.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: extrema_bounding
Example 31 | networkx 2.5 | Python 3.10 | released 2019-10
Problem: Write a function that returns the naive greedy modularity communities for a graph.
Starting code:
    import networkx as nx

    def naive_modularity_communities(G: nx.Graph) -> list:
        return nx.community.
Test:
    G = nx.karate_club_graph()
    naive_modularity_communities_result = 3
    assert len(list(naive_modularity_communities(G))) > 0 and len(list(naive_modularity_communities(G))) == naive_modularity_communities_result
Solution:
    naive_greedy_modularity_communities(G)
Type of change: name change
Docs: https://networkx.org/documentation/networkx-2.5/reference/algorithms/generated/networkx.algorithms.community.modularity_max.naive_greedy_modularity_communities.html#networkx.algorithms.community.modularity_max.naive_greedy_modularity_communities
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: naive_greedy_modularity_communities

Example 32 | networkx 2.4 | Python 3.10 | released 2020-08
Problem: Write a function that returns the naive greedy modularity communities for a graph.
Starting code:
    import networkx as nx

    def naive_modularity_communities(G: nx.Graph) -> list:
        return nx.community.
Test:
    G = nx.karate_club_graph()
    naive_modularity_communities_result = 3
    assert len(list(naive_modularity_communities(G))) > 0 and len(list(naive_modularity_communities(G))) == naive_modularity_communities_result
Solution:
    _naive_greedy_modularity_communities(G)
Type of change: name change
Docs: https://networkx.org/documentation/networkx-2.4/reference/algorithms/generated/networkx.algorithms.community.modularity_max.naive_greedy_modularity_communities.html#networkx.algorithms.community.modularity_max.naive_greedy_modularity_communities
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: _naive_greedy_modularity_communities
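The two networkx examples above show the same helper under a private name (networkx 2.4) and a public one (2.5). Code targeting both versions can resolve whichever name the installed release exports with getattr; the sketch below uses a dummy module object rather than networkx itself, so the attribute names here are illustrative:

```python
from types import SimpleNamespace

def resolve(mod, public_name):
    # Prefer the public name; fall back to the underscore-prefixed
    # private name used by older releases.
    fn = getattr(mod, public_name, None)
    if fn is None:
        fn = getattr(mod, "_" + public_name)
    return fn

# Dummy stand-in for an old release that only exposed the private name.
old_mod = SimpleNamespace(
    _naive_greedy_modularity_communities=lambda G: ["c1", "c2"]
)

communities = resolve(old_mod, "naive_greedy_modularity_communities")
print(communities(None))  # ['c1', 'c2']
```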
Example 33 | networkx 2.5 | Python 3.10 | released 2019-10
Problem: Write a function that returns the nodes as a list of NetworkX graph.
Starting code:
    import networkx as nx

    def get_nodes(G: nx.Graph) -> list:
        return
Test:
    G = nx.karate_club_graph()
    nodes_result = 34
    assert get_nodes(G) is not None and len(get_nodes(G)) > 0 and len(get_nodes(G)) == nodes_result
Solution:
    list(G.nodes)
Type of change: name change (attribute)
Docs: https://networkx.org/documentation/networkx-2.5/reference/classes/graph.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: list

Example 34 | networkx 2.5 | Python 3.10 | released 2019-10
Problem: Write a function that accesses the first edge of a NetworkX graph.
Starting code:
    import networkx as nx

    def get_first_edge(G: nx.Graph) -> tuple:
        return
Test:
    G = nx.karate_club_graph()
    first_edge_result = (0, 1)
    assert get_first_edge(G) is not None and first_edge_result == get_first_edge(G)
Solution:
    list(G.edges)[0]
Type of change: name change
Docs: https://networkx.org/documentation/networkx-2.5/reference/classes/graph.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: list

Example 35 | networkx 2.5 | Python 3.10 | released 2019-10
Problem: Write a function that computes the shortest path lengths and predecessors on shortest paths in weighted graphs using NetworkX.
Starting code:
    import networkx as nx

    def shortest_path(G: nx.Graph, source: int) -> list:
        return nx.
Test:
    G = nx.path_graph(5)
    shortest_path_result = nx.bellman_ford_predecessor_and_distance(G, 0)
    assert shortest_path(G, 0) is not None and len(shortest_path(G, 0)) == 2
Solution:
    bellman_ford_predecessor_and_distance(G, source)
Type of change: name change
Docs: https://networkx.org/documentation/networkx-2.5/reference/classes/graph.html
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: bellman_ford_predecessor_and_distance
Example 36 | gradio 3.24.0 | Python 3.10 | released 2023-03
Problem: Write a function that renders the quadratic formula in LaTeX using Gradio's Chatbot. The quadratic formula is given by: x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
Starting code:
    import gradio as gr

    def render_quadratic_formula():
        pass

    interface = gr.Interface(fn=render_quadratic_formula, inputs=[], outputs="text")

    def render_quadratic_formula():
        formula =
Test:
    assertion_value = render_quadratic_formula().startswith("$") and render_quadratic_formula().endswith("$")
    assert assertion_value
Solution:
    "$x = \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a}$"
    return formula
Type of change: argument change | Name: -
Docs: https://www.gradio.app/docs/gradio/chatbot, https://www.gradio.app/guides/the-interface-class, https://www.gradio.app/changelog
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: (none)

Example 37 | gradio 3.36.0 | Python 3.10 | released 2023-07
Problem: Write a function that renders the quadratic formula in LaTeX using Gradio's Chatbot. The quadratic formula is given by: x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
Starting code:
    import gradio as gr

    def render_quadratic_formula():
        formula = "x = \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a}"
        return formula

    interface = gr.Chatbot
Test:
    assertion_value = not render_quadratic_formula().startswith("$") and not render_quadratic_formula().endswith("$") and "$" in interface.latex_delimiters[0] and "$" in interface.latex_delimiters[1]
    assert assertion_value
Solution:
    (fn=render_quadratic_formula, latex_delimiters=("$$", "$$"))
Type of change: argument change | Name: -
Docs: https://www.gradio.app/docs/gradio/chatbot, https://www.gradio.app/changelog
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: (none)

Example 38 | gradio 3.36.0 | Python 3.10 | released 2023-07
Problem: Write a function that displays an image using Gradio where you cannot share the image.
Starting code:
    import gradio as gr

    def display_image():
        return "https://image_placeholder.com/42"

    iface = gr.Interface
Test:
    assertion_value = iface.output_components[0].show_share_button == False
    assert assertion_value
Solution:
    (fn=display_image, inputs=[], outputs=gr.Image(show_share_button=False))
Type of change: argument change | Name: -
Docs: https://www.gradio.app/docs/gradio/image, https://www.gradio.app/guides/the-interface-class, https://www.gradio.app/changelog
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: gradio.Image

Example 39 | gradio 3.0.0 | Python 3.10 | released 2022-05
Problem: Write a function that displays an image using Gradio where you cannot share the image.
Starting code:
    import gradio as gr

    def display_image():
        return "https://image_placeholder.com/42"

    iface = gr.Interface
Test:
    assertion_value = type(gr.Image()) == type(iface.output_components[0])
    assert assertion_value
Solution:
    (fn=display_image, inputs=[], outputs=gr.Image())
Type of change: argument change | Name: -
Docs: https://www.gradio.app/docs/gradio/image, https://www.gradio.app/guides/the-interface-class, https://www.gradio.app/changelog
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: gradio.Image

Example 40 | gradio 2.9.2 | Python 3.10 | released 2022-04
Problem: Write a function that takes an image input and returns a textbox output.
Starting code:
    import gradio as gr

    def process_image(image):
        return "Processed"

    iface = gr.Interface
Test:
    assertion_value = type(iface.input_components[0])==type(gr.inputs.Image()) and type(iface.output_components[0])==type(gr.outputs.Textbox()) or type(iface.input_components[0])==type(gr.components.Image()) and type(iface.output_components[0])==type(gr.components.Textbox())
    assert assertion_value
Solution:
    (fn=process_image, inputs=gr.inputs.Image(), outputs=gr.outputs.Textbox())
Type of change: argument change | Name: - | Additional dependencies: black==22.1.0
Docs: https://www.gradio.app/docs/gradio/image, https://www.gradio.app/guides/the-interface-class, https://www.gradio.app/changelog
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: gradio.outputs.Textbox, gradio.inputs.Image

Example 41 | gradio 3.24.0 | Python 3.10 | released 2023-03
Problem: Write a function that takes an image input and returns a label output.
Starting code:
    import gradio as gr

    def process_image(image):
        return "Processed"

    iface = gr.Interface
Test:
    assertion_value = type(iface.input_components[0])==type(gr.Image()) and type(iface.output_components[0])==type(gr.Label())
    assert assertion_value
Solution:
    (fn=process_image, inputs=gr.Image(), outputs=gr.Label())
Type of change: argument change | Name: -
Docs: https://www.gradio.app/docs/gradio/image, https://www.gradio.app/guides/the-interface-class, https://www.gradio.app/changelog
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: gradio.Label, gradio.Image

Example 42 | gradio 3.17.0 | Python 3.10 | released 2023-01
Problem: Write a function that returns the selected options from a list of options. Users can select multiple options.
Starting code:
    import gradio as gr

    def get_selected_options(options):
        return f"Selected options: {options}"

    selection_options = ["angola", "pakistan", "canada"]
    iface = gr.Interface(get_selected_options, inputs =
Test:
    assertion_value = (type(iface.input_components[0]) == gr.Dropdown and iface.input_components[0].multiselect == True) or type(iface.input_components[0]) == gr.CheckboxGroup
    assert assertion_value
Solution:
    gr.Dropdown(selection_options, multiselect=True), outputs = 'text')
Type of change: argument change | Name: -
Docs: https://www.gradio.app/4.44.1/docs/gradio/dropdown, https://www.gradio.app/guides/the-interface-class, https://www.gradio.app/changelog
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: gradio.Dropdown
Example 43 | scikit-learn 1.1 | Python 3.10 | released 2022-05
Problem: Train a Gradient Boosting Classifier from scikit-learn for a binary classification task and get the number of features used in fit.
Starting code:
    from sklearn.ensemble import GradientBoostingClassifier
    import numpy as np

    def get_n_features(clf: GradientBoostingClassifier) -> int:
Test:
    X = np.random.rand(100, 20)  # 100 samples, 20 features
    y = np.random.randint(0, 2, 100)
    clf = GradientBoostingClassifier()
    clf.fit(X, y)
    expected_n_features = 20
    assert get_n_features(clf) == expected_n_features
Solution:
    n_features_used = clf.n_features_in_
    return n_features_used
Type of change: output behaviour | Name: GradientBoostingClassifier | Additional dependencies: numpy==1.23.5
Docs: https://scikit-learn.org/1.1/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html#sklearn.ensemble.GradientBoostingClassifier
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: (none)

Example 44 | scikit-learn 1.1 | Python 3.10 | released 2022-05
Problem: You are tasked with developing a solution that uses Gradient Boosting Classifier from scikit-learn for a binary classification task with the mean squared error as the criterion.
Starting code:
    from sklearn.ensemble import GradientBoostingClassifier

    # Initialize the classifier
    def init_clf() -> GradientBoostingClassifier:
        classifier = GradientBoostingClassifier(criterion=
Test:
    expected_clf = GradientBoostingClassifier
    assert isinstance(init_clf(), expected_clf)
Solution:
    'squared_error')
    return classifier
Type of change: argument change | Name: GradientBoostingClassifier | Additional dependencies: numpy==1.23.5
Docs: https://scikit-learn.org/1.1/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html#sklearn.ensemble.GradientBoostingClassifier
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: (none)

Example 45 | scikit-learn 1.2 | Python 3.10 | released 2022-12
Problem: Given dummy data, determine the shape of the coef_ attribute of a CCA model fitted with this data.
Starting code:
    from sklearn.cross_decomposition import CCA
    import numpy as np

    def get_coef_shape(cca_model: CCA, X: np.ndarray, Y: np.ndarray) -> tuple:
        cca_model.fit(X, Y)
        return
Test:
    X = np.random.rand(100, 10)
    Y = np.random.rand(100, 5)
    cca_model = CCA()
    correct_shape = (X.shape[1], Y.shape[1])
    assert get_coef_shape(cca_model, X, Y) == correct_shape
Solution:
    cca_model.coef_.shape
Type of change: attribute change | Name: CCA | Additional dependencies: numpy==1.23.5
Docs: https://scikit-learn.org/1.2/modules/generated/sklearn.cross_decomposition.CCA.html#sklearn.cross_decomposition.CCA
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: (none)

Example 46 | scikit-learn 1.3 | Python 3.10 | released 2023-06
Problem: Given dummy data, determine the shape of the coef_ attribute of a CCA model fitted with this data.
Starting code:
    from sklearn.cross_decomposition import CCA
    import numpy as np

    def get_coef_shape(cca_model: CCA, X: np.ndarray, Y: np.ndarray) -> tuple:
        cca_model.fit(X, Y)
        return
Test:
    X = np.random.rand(100, 10)
    Y = np.random.rand(100, 5)
    cca_model = CCA()
    correct_shape = (Y.shape[1], X.shape[1])
    assert get_coef_shape(cca_model, X, Y) == correct_shape
Solution:
    cca_model.coef_.shape
Type of change: attribute change | Name: CCA | Additional dependencies: numpy==1.23.5
Docs: https://scikit-learn.org/1.3/modules/generated/sklearn.cross_decomposition.CCA.html#sklearn.cross_decomposition.CCA
functional: 1 | webdev: 0 | solution_api_call: true | extra_dependencies: null
API calls: (none)
3.10
scikit-learn
1.1
Generate a sparse coded signal where the data is transposed.
from sklearn.datasets import make_sparse_coded_signal def get_signal(n_samples: int, n_features: int, n_components: int, n_nonzero_coefs: int) -> tuple: return
47
n_samples=100 n_features=50 n_components=20 n_nonzero_coefs=10 expected_shape_y = (n_features, n_samples) expected_shape_d = (n_features, n_components) expected_shape_c = (n_components, n_samples) y,d,c = get_signal(n_samples, n_features, n_components, n_nonzero_coefs) assert y.shape == expected_shape_y assert d.shape...
make_sparse_coded_signal(n_samples=n_samples, n_features=n_features,n_components=n_components,n_nonzero_coefs=n_nonzero_coefs)
output behaviour
make_sparse_coded_signal
numpy==1.23.5
[ "https://scikit-learn.org/1.1/modules/generated/sklearn.datasets.make_sparse_coded_signal.html#sklearn.datasets.make_sparse_coded_signal" ]
1
0
true
[ "sklearn.datasets.make_sparse_coded_signal" ]
2022-05
null
3.10
scikit-learn
1.1
Apply Fast Independent Component Analysis (FastICA) with a specific whiten parameter setting. Store transformed data in a variable transformed_data.
from sklearn.datasets import load_digits from sklearn.utils import Bunch from sklearn.decomposition import FastICA def apply_fast_ica(data: Bunch, n_components: int) -> FastICA: return
48
data, _ = load_digits(return_X_y=True) n_components=7 expected_shape = (1797, n_components) assert apply_fast_ica(data, n_components).shape == expected_shape
FastICA(n_components=n_components,random_state=0,whiten=True).fit_transform(data)
argument change
FastICA
numpy==1.23.5
[ "https://scikit-learn.org/1.1/modules/generated/sklearn.decomposition.FastICA.html#sklearn.decomposition.FastICA" ]
1
0
true
[ "fit_transform", "sklearn.decomposition.FastICA" ]
2022-05
null
3.10
scikit-learn
1.3
Apply Fast Independent Component Analysis (FastICA) with a specific whiten parameter setting. Store transformed data in a variable transformed_data.
from sklearn.datasets import load_digits from sklearn.decomposition import FastICA from sklearn.utils import Bunch def apply_fast_ica(data: Bunch, n_components: int) -> FastICA: return
49
data, _ = load_digits(return_X_y=True) n_components=7 expected_shape = (1797, n_components) assert apply_fast_ica(data, n_components).shape == expected_shape
FastICA(n_components=n_components,random_state=0,whiten='arbitrary-variance').fit_transform(data)
argument change
FastICA
numpy==1.23.5
[ "https://scikit-learn.org/1.3/modules/generated/sklearn.decomposition.FastICA.html#sklearn.decomposition.FastICA" ]
1
0
true
[ "fit_transform", "sklearn.decomposition.FastICA" ]
2023-06
null
3.10
scikit-learn
1.1
Impute missing values in a dataset using SimpleImputer and return the SimpleImputer instance, including the `verbose` parameter if available.
from sklearn.impute import SimpleImputer import numpy as np def get_imputer(data: np.ndarray) -> SimpleImputer: return
50
data = np.array([[1, 2, 3], [4, None, 6], [7, 8, None]], dtype=float) expected_type=SimpleImputer assert isinstance(get_imputer(data), expected_type)
SimpleImputer()
argument change
SimpleImputer
numpy==1.23.5
[ "https://scikit-learn.org/1.1/modules/generated/sklearn.impute.SimpleImputer.html#sklearn.impute.SimpleImputer" ]
1
0
true
[ "sklearn.impute.SimpleImputer" ]
2022-05
null
3.10
scikit-learn
1.3
Retrieve and list all available scorer names, ensuring they are returned in a list format.
from sklearn import metrics def get_scorer_names() -> list: return
51
conditions = isinstance(get_scorer_names(), list) and len(get_scorer_names()) > 0 assert conditions
metrics.get_scorer_names()
name change
get_scorer_names
numpy==1.23.5
[ "https://scikit-learn.org/1.3/modules/classes.html#sklearn-metrics-metrics" ]
1
0
true
[ "sklearn.metrics.get_scorer_names" ]
2023-06
null
3.10
scikit-learn
1.2
Retrieve and list all available scorer names, ensuring they are returned in a list format.
from sklearn import metrics def get_scorer_names() -> list: return
52
conditions = isinstance(get_scorer_names(), list) and len(get_scorer_names()) > 0 assert conditions
list(metrics.SCORERS.keys())
name change
get_scorer_names
numpy==1.23.5
[ "https://scikit-learn.org/1.2/modules/classes.html#sklearn-metrics-metrics" ]
1
0
true
[ "list", "sklearn.metrics.SCORERS.keys" ]
2022-12
null
3.10
scikit-learn
1.1
Adapt the use of `manhattan_distances` to obtain a pairwise distance matrix.
from sklearn.metrics.pairwise import manhattan_distances import numpy as np def get_pairwise_dist(X: np.ndarray,Y: np.ndarray) -> np.ndarray: distances = manhattan_distances(X, Y, sum_over_features=False) return
53
X = np.array([[1, 2], [3, 4], [5, 6]]) Y = np.array([[1, 1], [4, 4]]) expected_result = np.array([1, 5, 5, 1, 9, 3]) assert np.allclose(get_pairwise_dist(X, Y), expected_result, atol=1e-3)
np.sum(distances, axis=1)
argument change
manhattan_distances
numpy==1.23.5
[ "https://scikit-learn.org/1.1/modules/generated/sklearn.metrics.pairwise.manhattan_distances.html#sklearn.metrics.pairwise.manhattan_distances" ]
1
0
true
[ "numpy.sum" ]
2022-05
null
3.10
scikit-learn
1.2
Adapt the use of `manhattan_distances` in scikit-learn version 1.2 to obtain a pairwise distance matrix.
from sklearn.metrics.pairwise import manhattan_distances import numpy as np def get_pairwise_dist(X: np.ndarray,Y: np.ndarray) -> np.ndarray: return
54
X = np.array([[1, 2], [3, 4], [5, 6]]) Y = np.array([[1, 1], [4, 4]]) expected_result = np.array([[1, 5], [5, 1], [9, 3]]) assert np.allclose(get_pairwise_dist(X, Y), expected_result, atol=1e-3)
manhattan_distances(X, Y)
argument change
manhattan_distances
numpy==1.23.5
[ "https://scikit-learn.org/1.2/modules/generated/sklearn.metrics.pairwise.manhattan_distances.html#sklearn.metrics.pairwise.manhattan_distances" ]
1
0
true
[ "sklearn.metrics.pairwise.manhattan_distances" ]
2022-12
null
3.10
matplotlib
3.4.0
Reverse the following color mapping.
from matplotlib.colors import * import numpy as np cmap = { "blue": [[1, 2, 2], [2, 2, 1]], "red": [[0, 0, 0], [1, 0, 0]], "green": [[0, 0, 0], [1, 0, 0]] } cmap_reversed =
55
expected_cmap_reversed = {'blue': [(-1.0, 1, 2), (0.0, 2, 2)], 'red': [(0.0, 0, 0), (1.0, 0, 0)], 'green': [(0.0, 0, 0), (1.0, 0, 0)]} reversed_cmap_dict = cmap_reversed._segmentdata assert reversed_cmap_dict == expected_cmap_reversed
LinearSegmentedColormap("custom_cmap", cmap).reversed()
name change
revcmap
[ "https://matplotlib.org/3.4.0/api/colors_api.html#module-matplotlib.colors" ]
1
0
true
[ "reversed", "LinearSegmentedColormap" ]
2021-05
null
3.10
pandas
1.5.0
Use the pandas groupby operation, where the intention is to include unobserved categories without dropping NA values, and sum over the groups.
import pandas as pd def get_grouped_df(df: pd.DataFrame) -> pd.DataFrame: return
56
df = pd.DataFrame({'x': pd.Categorical([1, None], categories=[1, 2, 3]), 'y': [3, 4]}) expected_output=pd.DataFrame({'y': [3, 4]}, index=pd.Index([1, None], name='x')) assert get_grouped_df(df).equals(expected_output)
df.groupby('x', observed=False, dropna=False).sum()
output behaviour
groupby
numpy==1.21.6
[ "https://pandas.pydata.org/pandas-docs/version/1.5/reference/api/pandas.DataFrame.html", "https://pandas.pydata.org/pandas-docs/version/1.5/reference/api/pandas.Series.html#pandas.Series" ]
1
0
true
[ "sum", "df.groupby" ]
2015-10
null
3.10
pandas
1.5.1
Use the pandas groupby operation with observed=False and dropna=False, where the intention is to include unobserved categories without dropping NA values. Your job is to predict the expected output after this operation.
import pandas as pd def get_grouped_df(df: pd.DataFrame) -> pd.DataFrame: return
57
df = pd.DataFrame({'x': pd.Categorical([1, None], categories=[1, 2, 3]), 'y': [3, 4]}) expected_output = pd.DataFrame({'y': [3, 0, 0]}, index=pd.Index([1, 2, 3], name='x')) assert get_grouped_df(df).equals(expected_output)
df.groupby('x', observed=False, dropna=False).sum()
output behaviour
groupby
numpy==1.21.6
[ "https://pandas.pydata.org/pandas-docs/version/1.5.1/reference/api/pandas.DataFrame.html", "https://pandas.pydata.org/pandas-docs/version/1.5.1/reference/api/pandas.Series.html#pandas.Series" ]
1
0
true
[ "sum", "df.groupby" ]
2015-10
null
3.10
pandas
1.5.0
Predict behaviour of setting values with iloc inplace.
import pandas as pd import numpy as np def get_expected_value(df: pd.DataFrame) -> pd.Series: return
58
df = pd.DataFrame({'price': [11.1, 12.2]}, index=['book1', 'book2']) original_prices = df['price'] new_prices = np.array([98, 99]) df.iloc[:, 0] = new_prices correct_prices = pd.Series({'book1': 11.1, 'book2': 12.2}) assert get_expected_value(df).equals(correct_prices)
pd.Series([11.1, 12.2], index=['book1', 'book2'], dtype=np.float64)
output behaviour
iloc
numpy==1.21.6
[ "https://pandas.pydata.org/pandas-docs/version/1.5/reference/api/pandas.DataFrame.html", "https://pandas.pydata.org/pandas-docs/version/1.5/reference/api/pandas.Series.html#pandas.Series" ]
1
0
true
[ "pandas.Series" ]
2015-10
null
3.10
pandas
2
Predict behaviour of setting values with iloc inplace.
import pandas as pd import numpy as np def get_expected_value(df: pd.DataFrame) -> pd.Series: return
59
df = pd.DataFrame({'price': [11.1, 12.2]}, index=['book1', 'book2']) original_prices = df['price'] new_prices = np.array([98, 99]) df.iloc[:, 0] = new_prices correct_prices = pd.Series({'book1': 98.0, 'book2': 99.0}) assert get_expected_value(df).equals(correct_prices)
pd.Series([98.0, 99.0], index=['book1', 'book2'], dtype=np.float64)
output behaviour
iloc
numpy==1.21.6
[ "https://pandas.pydata.org/pandas-docs/version/2.0/reference/api/pandas.DataFrame.html", "https://pandas.pydata.org/pandas-docs/version/2.0/reference/api/pandas.Series.html#pandas.Series" ]
1
0
true
[ "pandas.Series" ]
2017-01
null
3.10
pandas
1.5.0
Predict behaviour of integer slicing on a Series.
import pandas as pd import numpy as np def get_slice(ser: pd.Series, start: int, end: int) -> pd.Series: return
60
ser = pd.Series([1, 2, 3, 4, 5], index=[2, 3, 5, 7, 11]) start,end=2,4 sliced_ser = pd.Series([3, 4], index=[5, 7]) assert sliced_ser.equals(get_slice(ser, start, end)), 'Slicing does not match expected output'
ser[start:end]
output behaviour
Series slicing
numpy==1.21.6
[ "https://pandas.pydata.org/pandas-docs/version/1.5/reference/api/pandas.DataFrame.html", "https://pandas.pydata.org/pandas-docs/version/1.5/reference/api/pandas.Series.html#pandas.Series" ]
1
0
true
[]
2015-10
null
3.10
pandas
2
Predict behaviour of integer slicing on a Series.
import pandas as pd import numpy as np def get_slice(ser: pd.Series, start: int, end: int) -> pd.Series: return
61
ser = pd.Series([1, 2, 3, 4, 5], index=[2, 3, 5, 7, 11]) start,end=2,4 sliced_ser = pd.Series([3, 4], index=[5, 7]) assert sliced_ser.equals(get_slice(ser, start, end)), 'Slicing does not match expected label-based output'
ser.iloc[start:end]
output behaviour
Series slicing
numpy==1.21.6
[ "https://pandas.pydata.org/pandas-docs/version/2.0/reference/api/pandas.DataFrame.html", "https://pandas.pydata.org/pandas-docs/version/2.0/reference/api/pandas.Series.html#pandas.Series" ]
1
0
true
[]
2017-01
null
3.10
pandas
1.4.0
Write a function to return the correct type of an int index.
import pandas as pd def correct_type(index: pd.Index) -> str: return
62
index = pd.Index([1, 2, 3], dtype='int32') assertion_1_value = isinstance(correct_type(index), str) assertion_2_value = correct_type(index) == 'int64' assert assertion_1_value assert assertion_2_value
'int64'
output behaviour
Index
numpy==1.21.6
[ "https://pandas.pydata.org/pandas-docs/version/1.4/reference/api/pandas.DataFrame.html", "https://pandas.pydata.org/pandas-docs/version/1.4/reference/api/pandas.Series.html#pandas.Series" ]
1
0
true
[]
2014-08
null
3.10
pandas
1.4.0
Combine series and dataframes and return a tuple with the combined dataframe first, then the combined series.
import pandas as pd def combined(df1: pd.DataFrame, df2: pd.DataFrame, series1: pd.Series, series2: pd.Series) -> tuple: return
63
series1 = pd.Series([1, 2]) series2 = pd.Series([3, 4]) df1 = pd.DataFrame([[1, 2], [3, 4]], columns=list('AB')) df2 = pd.DataFrame([[5, 6], [7, 8]], columns=list('AB')) expected_series_values = [1, 2, 3, 4] expected_dataframe_values = [[1, 2], [3, 4], [5, 6], [7, 8]] combined_dataframe, combined_series = combined(df1,...
df1.append(df2, ignore_index=True), series1.append(series2, ignore_index=True)
name change
append
numpy==1.21.6
[ "https://pandas.pydata.org/pandas-docs/version/1.4/reference/api/pandas.DataFrame.html", "https://pandas.pydata.org/pandas-docs/version/1.4/reference/api/pandas.Series.html#pandas.Series" ]
1
0
true
[ "df1.append", "series1.append" ]
2014-08
null
3.10
pandas
2
Write a function to return the correct type of an int index.
import pandas as pd def correct_type(index: pd.Index) -> str: return
64
index = pd.Index([1, 2, 3], dtype='int32') assert isinstance(correct_type(index), str) assert correct_type(index) == "int32"
str(index.dtype)
output behaviour
Index
numpy==1.21.6
[ "https://pandas.pydata.org/pandas-docs/version/2.0/reference/api/pandas.DataFrame.html", "https://pandas.pydata.org/pandas-docs/version/2.0/reference/api/pandas.Series.html#pandas.Series" ]
1
0
true
[ "str" ]
2017-01
null
3.10
pandas
2
Combine series and dataframes. Return a tuple with the combined dataframe first, then the combined series.
import pandas as pd def combined(df1: pd.DataFrame, df2: pd.DataFrame, series1: pd.Series, series2: pd.Series) -> tuple: return
65
series1 = pd.Series([1, 2]) series2 = pd.Series([3, 4]) df1 = pd.DataFrame([[1, 2], [3, 4]], columns=list('AB')) df2 = pd.DataFrame([[5, 6], [7, 8]], columns=list('AB')) expected_series_values = [1, 2, 3, 4] expected_dataframe_values = [[1, 2], [3, 4], [5, 6], [7, 8]] combined_dataframe, combined_series = combined(df1,...
pd.concat([df1, df2], ignore_index=True), pd.concat([series1,series2],ignore_index=True)
name change
append
numpy==1.21.6
[ "https://pandas.pydata.org/pandas-docs/version/2.0/reference/api/pandas.DataFrame.html", "https://pandas.pydata.org/pandas-docs/version/2.0/reference/api/pandas.Series.html#pandas.Series" ]
1
0
true
[ "pandas.concat" ]
2017-01
null
3.10
numpy
1.21.0
Implement a function that calculates the convolution of two arrays with the mode set to full.
import numpy as np def apply_convolution_full(arr1 : np.ndarray, arr2 : np.ndarray) -> np.ndarray: return
66
arr1 = np.array([1, 2, 3]) arr2 = np.array([0, 1, 0.5]) assertion_result = apply_convolution_full(arr1, arr2).all() == False assert assertion_result
np.convolve(arr1, arr2, mode="full")
argument change
numpy.convolve
[ "https://numpy.org/doc/1.21/reference/generated/numpy.convolve.html" ]
1
0
true
[ "numpy.convolve" ]
2021-06
null
3.10
numpy
1.21.0
Implement a function that calculates the convolution of two arrays with the mode set to valid.
import numpy as np def apply_convolution_valid(arr1 : np.ndarray , arr2 : np.ndarray) -> np.ndarray: return
67
arr1 = np.array([1, 2, 3]) arr2 = np.array([0, 1, 0.5]) assertion_result = apply_convolution_valid(arr1, arr2).all() == True assert assertion_result
np.convolve(arr1, arr2, mode="valid")
argument change
numpy.convolve
[ "https://numpy.org/doc/1.21/reference/generated/numpy.convolve.html" ]
1
0
true
[ "numpy.convolve" ]
2021-06
null
3.10
numpy
1.21.0
Implement a function that calculates the Cross-correlation of two 1-dimensional sequences with the mode set to full.
import numpy as np def apply_correlate_full(arr1 : np.ndarray, arr2 : np.ndarray) -> np.ndarray: return
68
arr1 = np.array([1, 2, 3]) arr2 = np.array([0, 1, 0.5]) assertion_value = apply_correlate_full(arr1, arr2).all() == False assert assertion_value
np.correlate(arr1, arr2, mode="full")
argument change
numpy.correlate
[ "https://numpy.org/doc/1.21/reference/generated/numpy.correlate.html" ]
1
0
true
[ "numpy.correlate" ]
2021-06
null
3.10
numpy
1.25.0
Given two arrays, find their common types.
import numpy as np def find_common_type(arr1:np.ndarray, arr2:np.ndarray) -> np.dtype: return np.
69
array1 = np.array([1, 2, 3]) array2 = np.array([4.0, 5.0, 6.0]) expected_common_type = np.float64 assertion_value = find_common_type(array1, array2) == expected_common_type assert assertion_value
common_type(arr1, arr2)
deprecation
find_common_type
[ "https://numpy.org/doc/1.25/reference/routines.dtype.html" ]
1
0
true
[ "common_type" ]
2023-06
null
3.10
numpy
1.21.0
Given two arrays, find their common types.
import numpy as np def find_common_type(arr1:np.ndarray, arr2:np.ndarray) -> np.dtype: return np.
70
array1 = np.array([1, 2, 3]) array2 = np.array([4.0, 5.0, 6.0]) expected_common_type = np.float64 assertion_value = find_common_type(array1, array2) == expected_common_type assert assertion_value
find_common_type(arr1, arr2)
deprecation
find_common_type
[ "https://numpy.org/doc/1.21/reference/routines.dtype.html" ]
1
0
true
[ "find_common_type" ]
2021-06
null
3.10
numpy
1.25.0
Write a function that rounds an array of numbers.
import numpy as np def custom_round(arr:np.ndarray) -> np.ndarray: return
71
def test_custom_round(): arr = np.array([1.5, 2.3, 3.7]) result = custom_round(arr) expected = np.array([2.0, 2.0, 4.0]) assert np.array_equal(result, expected) test_custom_round()
np.round(arr)
deprecation
round_
[ "https://numpy.org/doc/1.25/reference/routines.math.html" ]
1
0
true
[ "numpy.round" ]
2023-06
null
3.10
numpy
1.25.0
Write a function that computes the product of an array.
import numpy as np def custom_product(arr:np.ndarray) -> np.ndarray: return
72
def test_custom_product(): arr = np.array([1, 2, 3, 4]) result = custom_product(arr) expected = 24 assert result == expected test_custom_product()
np.prod(arr)
deprecation
product
[ "https://numpy.org/doc/1.25/reference/routines.math.html" ]
1
0
true
[ "numpy.prod" ]
2023-06
null
3.10
numpy
1.25.0
Write a function that computes the cumulative product of an array.
import numpy as np def custom_cumproduct(arr:np.ndarray) -> np.ndarray: return
73
def test_custom_cumproduct(): arr = np.array([1, 2, 3, 4]) result = custom_cumproduct(arr) expected = np.array([1, 2, 6, 24]) assert np.array_equal(result, expected) test_custom_cumproduct()
np.cumprod(arr)
deprecation
cumproduct
[ "https://numpy.org/doc/1.25/reference/routines.math.html" ]
1
0
true
[ "numpy.cumprod" ]
2023-06
null
3.10
numpy
1.25.0
Write a function that checks if any elements in an array are true.
import numpy as np def custom_sometrue(arr:np.ndarray) -> np.ndarray: return
74
def test_custom_sometrue(): arr = np.array([0, 0, 1, 0]) result = custom_sometrue(arr) expected = True assert result == expected test_custom_sometrue()
np.any(arr)
deprecation
sometrue
[ "https://numpy.org/doc/1.25/reference/routines.logic.html" ]
1
0
true
[ "numpy.any" ]
2023-06
null
3.10
numpy
1.25.0
Write a function that checks if all elements in an array are true.
import numpy as np def custom_alltrue(arr:np.ndarray) -> np.ndarray: return
75
def test_custom_alltrue(): arr = np.array([1, 1, 1, 1]) result = custom_alltrue(arr) expected = True assert result == expected test_custom_alltrue()
np.all(arr)
deprecation
alltrue
[ "https://numpy.org/doc/1.25/reference/routines.logic.html" ]
1
0
true
[ "numpy.all" ]
2023-06
null
3.10
numpy
1.21.0
Write a function that rounds an array of numbers.
import numpy as np def custom_round(arr:np.ndarray) -> np.ndarray: return
76
def test_custom_round(): arr = np.array([1.5, 2.3, 3.7]) result = custom_round(arr) expected = np.array([2.0, 2.0, 4.0]) assert np.array_equal(result, expected) test_custom_round()
np.round_(arr)
deprecation
round_
[ "https://numpy.org/doc/1.21/reference/routines.math.html" ]
1
0
true
[ "numpy.round_" ]
2021-06
null
3.10
numpy
1.21.0
Write a function that computes the product of an array.
import numpy as np def custom_product(arr:np.ndarray) -> np.ndarray: return
77
def test_custom_product(): arr = np.array([1, 2, 3, 4]) result = custom_product(arr) expected = 24 assert result == expected test_custom_product()
np.product(arr)
deprecation
product
[ "https://numpy.org/doc/1.21/reference/routines.math.html" ]
1
0
true
[ "numpy.product" ]
2021-06
null
3.10
numpy
1.21.0
Write a function that computes the cumulative product of an array.
import numpy as np def custom_cumproduct(arr:np.ndarray) -> np.ndarray: return
78
def test_custom_cumproduct(): arr = np.array([1, 2, 3, 4]) result = custom_cumproduct(arr) expected = np.array([1, 2, 6, 24]) assert np.array_equal(result, expected) test_custom_cumproduct()
np.cumproduct(arr)
deprecation
cumproduct
[ "https://numpy.org/doc/1.21/reference/routines.math.html" ]
1
0
true
[ "numpy.cumproduct" ]
2021-06
null
3.10
numpy
1.21.0
Write a function that checks if any elements in an array are true.
import numpy as np def custom_anytrue(arr:np.ndarray) -> np.ndarray: return
79
def test_custom_sometrue(): arr = np.array([0, 0, 1, 0]) result = custom_anytrue(arr) expected = True assert result == expected test_custom_sometrue()
np.sometrue(arr)
deprecation
sometrue
[ "https://numpy.org/doc/1.21/reference/routines.logic.html" ]
1
0
true
[ "numpy.sometrue" ]
2021-06
null
3.10
numpy
1.21.0
Write a function that checks if all elements in an array are true.
import numpy as np def custom_alltrue(arr:np.ndarray) -> np.ndarray: return
80
def test_custom_alltrue(): arr = np.array([1, 1, 1, 1]) result = custom_alltrue(arr) expected = True assert result == expected test_custom_alltrue()
np.alltrue(arr)
deprecation
alltrue
[ "https://numpy.org/doc/1.21/reference/routines.logic.html" ]
1
0
true
[ "numpy.alltrue" ]
2021-06
null
3.10
lightgbm
3.0.0
Perform cross-validation with the given parameters and return the evaluation history for each fold.
import numpy as np import lightgbm as lgb from sklearn.datasets import make_classification NUM_SAMPLES = 500 NUM_FEATURES = 20 INFORMATIVE_FEATURES = 2 REDUNDANT_FEATURES = 10 RANDOM_STATE = 42 NUM_BOOST_ROUND = 100 NFOLD = 5 LEARNING_RATE = 0.05 EARLY_STOPPING_ROUNDS = 10 X, y = make_classification(n_samples=NUM_SAMP...
82
import numpy as np assertion_1_value = 'cvbooster' in cv_results assertion_2_value = len(cv_results['cvbooster'].boosters) == NFOLD assertion_3_value = all(isinstance(booster, lgb.Booster) for booster in cv_results['cvbooster'].boosters) assert assertion_1_value assert assertion_2_value assert assertion_3_value
return_cvbooster=True )
Argument or Attribute change
cv
numpy==1.26.4 scikit-learn==1.3.2
[ "https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.cv.html", "https://github.com/microsoft/lightgbm/releases?page=2" ]
0
0
true
[]
2020-08
null
3.10
lightgbm
3.0.0
Write a function to decode a byte string.
import lightgbm.compat as compat def decode_string(string: bytes) -> str: return
83
ENCODED_STRING = b'\x68\x65\x6c\x6c\x6f' expected = 'hello' assert decode_string(ENCODED_STRING) == expected
compat.decode_string(string)
Semantics or Function Behaviour change
decode_string
[ "https://github.com/microsoft/LightGBM/blob/master/python-package/lightgbm/compat.py", "https://github.com/microsoft/lightgbm/releases?page=2" ]
1
0
true
[ "lightgbm.compat.decode_string" ]
2020-08
null
3.10
lightgbm
3.0.0
Perform cross-validation with the given parameters and display the training metric in progress.
import numpy as np import lightgbm as lgb from sklearn.datasets import make_classification NUM_SAMPLES = 500 NUM_FEATURES = 20 INFORMATIVE_FEATURES = 2 REDUNDANT_FEATURES = 10 RANDOM_STATE = 42 NUM_BOOST_ROUND = 100 NFOLD = 5 LEARNING_RATE = 0.05 EARLY_STOPPING_ROUNDS = 10 X, y = make_classification(n_samples=NUM_SAMP...
84
assertion_value = {'train binary_logloss-mean', 'train binary_logloss-stdv', 'valid binary_logloss-mean', 'valid binary_logloss-stdv'}.issubset(cv_results.keys()) assert assertion_value
eval_train_metric=True )
Argument or Attribute change
cv
numpy==1.26.4 scikit-learn==1.3.2
[ "https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.cv.html", "https://github.com/microsoft/lightgbm/releases?page=2" ]
0
0
true
[]
2020-08
null
3.10
lightgbm
3.0.0
Write a function to convert a ctypes pointer to a NumPy array of the specified length.
import lightgbm as lgb import numpy as np import ctypes def convert_cint32_array_to_numpy(c_pointer: ctypes.POINTER, length: int) -> np.ndarray: """ Convert a ctypes pointer to a numpy array. Args: c_pointer (c_array_type): A ctypes pointer to an array of integers. length (int): The le...
85
c_array_type = ctypes.POINTER(ctypes.c_int32) c_array = (ctypes.c_int32 * 5)(1, 2, 3, 4, 5) c_pointer = ctypes.cast(c_array, c_array_type) length = 5 np_array = convert_cint32_array_to_numpy(c_pointer, length) assertion_1_value = isinstance(np_array, np.ndarray) assertion_2_value = np_array.shape == (5,) assertion_3_va...
.basic.cint32_array_to_numpy(c_pointer, length)
Function Name change
cint32_array_to_numpy
numpy==1.26.4
[ "https://lightgbm.readthedocs.io/en/v4.3.0/_modules/lightgbm/basic.html", "https://github.com/microsoft/lightgbm/releases?page=2" ]
1
0
true
[ "basic.cint32_array_to_numpy" ]
2020-08
null
3.10
lightgbm
3.0.0
Write a function to get the parameters of a dataset object as a dictionary.
import lightgbm as lgb import numpy as np def get_params(dataset: lgb.Dataset) -> dict: """ Get the parameters of the dataset. Args: dataset (lgb.Dataset): The dataset to get the parameters from. Returns: dict: The parameters of the dataset. """ return...
86
data = np.random.rand(10, 2) label = np.random.randint(2, size=10) dataset = lgb.Dataset(data, label=label) params = get_params(dataset) assertion_value= isinstance(params, dict) or params is None assert assertion_value
dataset.get_params()
Semantics or Function Behaviour change
get_params
numpy==1.26.4
[ "https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.Dataset.html", "https://github.com/microsoft/lightgbm/releases?page=2" ]
1
0
true
[ "dataset.get_params" ]
2020-08
null
3.10
lightgbm
3.0.0
Write a function to serialize a NumPy array to a JSON string using a custom default function that converts NumPy arrays to lists.
import numpy as np import json from lightgbm.compat import json_default_with_numpy def dump_json(data: any) -> str: """ Dump data to JSON format. Args: data (any): The data to dump. Returns: str: The JSON representation of the data. """ return json.du...
87
NUMPY_ARRAY = np.array([1, 2, 3]) json_data = dump_json(NUMPY_ARRAY) expected = '[1, 2, 3]' assert json_data == expected
, default=json_default_with_numpy)
Function Name change
json_default_with_numpy
numpy==1.26.4
[ "https://github.com/microsoft/LightGBM/blob/master/python-package/lightgbm/compat.py", "https://github.com/microsoft/lightgbm/releases?page=2" ]
1
0
true
[]
2020-08
null
3.10
lightgbm
4.3.0
Write a function to create a ctypes array from a list of values.
import ctypes import lightgbm.basic as basic def create_c_array(values: list, ctype: type) -> ctypes.Array: """ Create a ctypes array from a list of values. Args: values (list): A list of values to be converted to a ctypes array. ctype (type): The ctypes type of the array elements. ...
88
CTYPE = ctypes.c_double VALUES = [0.1, 0.2, 0.3, 0.4, 0.5] c_array = create_c_array(VALUES, CTYPE) assertion_1_value = all(isinstance(i, float) for i in c_array) assertion_2_value = all(c_array[i] == VALUES[i] for i in range(len(VALUES))) assert assertion_1_value assert assertion_2_value
basic._c_array(ctype, values)
Function Name change
basic._c_array
[ "https://lightgbm.readthedocs.io/en/v4.3.0/_modules/lightgbm/basic.html" ]
1
0
true
[ "lightgbm.basic._c_array" ]
2024-01
null
3.10
lightgbm
4.3.0
Write a function to convert a Python string to a C string.
import lightgbm as lgb import ctypes def c_str(python_string: str) -> ctypes.c_char_p: """ Convert a Python string to a ctypes c_char_p. Args: python_string (str): The Python string to convert. Returns: ctypes.c_char_p: The converted ctypes c_char_p. """ return
89
python_string = "lightgbm" c_string = c_str(python_string) assertion_1_value = isinstance(c_string, ctypes.c_char_p) assertion_2_value = c_string.value.decode('utf-8') == python_string assert assertion_1_value assert assertion_2_value
lgb.basic._c_str(python_string)
Function Name change
basic._c_str
[ "https://lightgbm.readthedocs.io/en/v4.3.0/_modules/lightgbm/basic.html" ]
1
0
true
[ "lightgbm.basic._c_str" ]
2024-01
null
3.10
lightgbm
4.3.0
GitChameleon 2.0

GitChameleon 2.0 is an AI coding benchmark comprising 328 Python-based problems conditioned on specific versions of popular libraries for scientific computing and web development. It evaluates whether AI code generation models can correctly use library APIs as they existed at a particular version, a challenging test of version-specific knowledge.

Note: GitChameleon 2.0 is a distinct, newer work than the original GitChameleon benchmark. Please do not confuse the two.

Project website: gitchameleon-2-0.github.io (paper, results, citation, and getting-started guide).

Paper: GitChameleon 2.0: Evaluating AI Code Generation Against Python Library Version Incompatibilities (ACL 2026, Main)

Example Task

Each problem provides a library version, a natural-language description, and a stub to complete:

Library: torch==1.9.0 | Python: 3.7

Problem: Calculate the logarithm of the cumulative distribution function of the standard normal distribution using available functions. If not available in PyTorch, use another library.

import torch
def log_ndtr(input_tensor: torch.Tensor) -> torch.Tensor:
    # your solution here

The model must produce a solution that passes the visible test:

from scipy.stats import norm
input_tensor = torch.linspace(-10, 10, steps=20)
expected_result = torch.tensor([-5.3231e+01, ..., -7.6199e-24], dtype=torch.float64)
assert torch.allclose(log_ndtr(input_tensor), expected_result, rtol=1e-3, atol=1e-3)

This particular problem tests awareness that torch.special.log_ndtr was not available in torch==1.9.0, requiring the model to fall back to scipy.stats.norm.logcdf.

Dataset Configs

Config      Description                                                  Rows
problems    Problem statements, starting code, solutions, and metadata   328
solutions   Ground-truth solutions keyed by example_id                   328

Usage

from datasets import load_dataset

# Load problems
ds = load_dataset("cabbage972/GitChameleon-2.0", "problems")

# Load ground-truth solutions
solutions = load_dataset("cabbage972/GitChameleon-2.0", "solutions")

Schema

problems config

Field                    Type          Description
example_id               string        Unique identifier (0–327)
library                  string        Target Python library (e.g. torch, scipy, flask)
version                  string        Library version the problem is conditioned on
python_version           string        Required Python version (3.7, 3.9, or 3.10)
problem                  string        Natural-language task description
starting_code            string        Stub function/class definition to complete
solution                 string        Reference solution
test                     string        Visible pytest assertions
functional               int           1 if the library is a scientific/functional library
webdev                   int           1 if the library is a web-development library
solution_api_call        bool          Whether the solution uses an API call
api_calls                list[string]  API calls used in the reference solution
type_of_change           string        Category of version change (e.g. argument change, name change)
name_of_class_or_func    string        Name of the target function or class
additional_dependencies  string        Extra packages required (e.g. scipy==1.7.3)
extra_dependencies       string        Additional optional dependencies (nullable)
release_date             string        Library release date (YYYY-MM)
docs                     list[string]  Relevant documentation URLs
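Since each row is a plain mapping over these fields, simple aggregations need no special tooling. A minimal sketch using illustrative rows shaped like the problems config:

```python
from collections import Counter

def count_by_library(rows):
    # Tally how many problems target each library, using the `library` field.
    return Counter(r["library"] for r in rows)

# Illustrative rows shaped like the `problems` config (values are examples only).
rows = [
    {"example_id": "0", "library": "torch", "version": "1.9.0"},
    {"example_id": "99", "library": "django", "version": "5.0.0"},
    {"example_id": "100", "library": "django", "version": "4.0.0"},
]
counts = count_by_library(rows)  # Counter({'django': 2, 'torch': 1})
```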

solutions config

Field       Type    Description
example_id  string  Matches example_id in problems
answer      string  Complete function/class implementation
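Because the two configs share example_id, ground-truth answers can be joined to problems with an ordinary dict. A sketch under the schema above (the row contents are illustrative):

```python
def index_solutions(solution_rows):
    # Build an example_id -> answer lookup from `solutions` config rows.
    return {row["example_id"]: row["answer"] for row in solution_rows}

# Illustrative row shaped like the `solutions` config.
solution_rows = [{"example_id": "0", "answer": "return torch.log(x)"}]
lookup = index_solutions(solution_rows)
# lookup["0"] holds the reference implementation for problem 0.
```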

Running Evaluation

Evaluation is run via the GitChameleonBenchmark harness. Requirements: Python 3.9+, Poetry, and Docker.

git clone https://github.com/mrcabbage972/GitChameleonBenchmark.git
cd GitChameleonBenchmark
make evals-setup
evaluate --solution-path SOLUTION_PATH [--workers WORKERS]

Your solution file should be a JSONL file in which each line contains example_id and answer fields (matching the solutions config schema above). Success rates are printed to stdout, and detailed logs are written next to the solution file.

Libraries Covered

26 libraries, including torch, scipy, sympy, flask, falcon, numpy, scikit-learn, pandas, django, librosa, and more.

Citation

@misc{misra2025gitchameleon20evaluatingai,
      title={GitChameleon 2.0: Evaluating AI Code Generation Against Python Library Version Incompatibilities},
      author={Diganta Misra and Nizar Islah and Victor May and Brice Rauby and Zihan Wang and Justine Gehring and Antonio Orvieto and Muawiz Chaudhary and Eilif B. Muller and Irina Rish and Samira Ebrahimi Kahou and Massimo Caccia},
      year={2025},
      eprint={2507.12367},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={https://arxiv.org/abs/2507.12367},
}