
How to join (merge) data frames (inner, outer, left, right)

Given two data frames:

df1 = data.frame(CustomerId = c(1:6), Product = c(rep("Toaster", 3), rep("Radio", 3)))
df2 = data.frame(CustomerId = c(2, 4, 6), State = c(rep("Alabama", 2), rep("Ohio", 1)))

df1
#  CustomerId Product
#           1 Toaster
#           2 Toaster
#           3 Toaster
#           4   Radio
#           5   Radio
#           6   Radio

df2
#  CustomerId   State
#           2 Alabama
#           4 Alabama
#           6    Ohio

How can I do a database style, i.e. SQL style, join? That is, how do I get:

  • An inner join of df1 and df2: return only the rows in which the left table has matching keys in the right table.
  • An outer join of df1 and df2: return all rows from both tables, joining records from the left which have matching keys in the right table.
  • A left outer join (or simply left join) of df1 and df2: return all rows from the left table, and any rows with matching keys from the right table.
  • A right outer join of df1 and df2: return all rows from the right table, and any rows with matching keys from the left table.

Extra credit:

How can I do an SQL style select statement?


By using the merge function and its optional parameters:

Inner join: merge(df1, df2) will work for these examples because R automatically joins the frames by common variable names, but you would most likely want to specify merge(df1, df2, by = "CustomerId") to make sure that you are matching on only the fields you want. You can also use the by.x and by.y parameters if the matching variables have different names in the different data frames.

Outer join: merge(x = df1, y = df2, by = "CustomerId", all = TRUE)

Left outer: merge(x = df1, y = df2, by = "CustomerId", all.x = TRUE)

Right outer: merge(x = df1, y = df2, by = "CustomerId", all.y = TRUE)

Cross join: merge(x = df1, y = df2, by = NULL)

Just as with the inner join, you would probably want to explicitly pass "CustomerId" to R as the matching variable. I think it's almost always best to explicitly state the identifiers on which you want to merge; it's safer if the input data frames change unexpectedly, and easier to read later on.

You can merge on multiple columns by giving by a vector, e.g., by = c("CustomerId", "OrderId").

If the column names to merge on are not the same, you can specify, e.g., by.x = "CustomerId_in_df1", by.y = "CustomerId_in_df2", where CustomerId_in_df1 is the name of the column in the first data frame and CustomerId_in_df2 is the name of the column in the second. (These can also be vectors if you need to merge on multiple columns.)
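
For illustration, a small sketch of those two variations; the CustomerId_in_df1 / CustomerId_in_df2 names come from the paragraph above, while the a and b frames are made up here:

a <- data.frame(CustomerId_in_df1 = 1:3, Product = c("Toaster", "Radio", "TV"))
b <- data.frame(CustomerId_in_df2 = 2:4, State = c("Alabama", "Ohio", "Texas"))

# join on key columns that are named differently in each frame
merge(a, b, by.x = "CustomerId_in_df1", by.y = "CustomerId_in_df2")

# merging on several columns is just a longer `by` vector, e.g.
# merge(df1, df2, by = c("CustomerId", "OrderId"))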


I recommend checking out Gabor Grothendieck's sqldf package, which allows you to express these operations in SQL.

library(sqldf)

## inner join
df3 <- sqldf("SELECT CustomerId, Product, State
              FROM df1
              JOIN df2 USING(CustomerID)")

## left join (substitute 'right' for right join)
df4 <- sqldf("SELECT CustomerId, Product, State
              FROM df1
              LEFT JOIN df2 USING(CustomerID)")

I find the SQL syntax to be simpler and more natural than its R equivalent (but this may just reflect my RDBMS bias).

See Gabor's sqldf GitHub for more information on joins.
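
As a side note on the question's "extra credit" point, the same interface handles a plain SQL-style select as well; a minimal sketch:

## SQL-style select on a single data frame
sqldf("SELECT CustomerId, Product FROM df1 WHERE Product = 'Toaster'")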


There is the data.table method for an inner join, which is very time- and memory-efficient (and necessary for some larger data.frames):

library(data.table)

dt1 <- data.table(df1, key ="CustomerId")
dt2 <- data.table(df2, key ="CustomerId")

joined.dt1.dt.2 <- dt1[dt2]

merge also works on data.tables (since it is generic and calls merge.data.table):

merge(dt1, dt2)

data.table joins documented on Stack Overflow:

  • How to do a data.table merge operation
  • Translating SQL joins on foreign keys to R data.table syntax
  • Efficient alternatives to merge for larger data.frames in R
  • How to do a basic left outer join with data.table in R?

Yet another option is the join function found in the plyr package:

library(plyr)

join(df1, df2,
     type ="inner")

#   CustomerId Product   State
# 1          2 Toaster Alabama
# 2          4   Radio Alabama
# 3          6   Radio    Ohio

Options for type: "inner", "left", "right", "full".

From ?join: Unlike merge, [join] preserves the order of x no matter what join type is used.
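
For example, a left join that keeps df1's row order (whereas merge would sort the result by the key unless sort = FALSE):

join(df1, df2, by = "CustomerId", type = "left")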


You can also do joins using Hadley Wickham's awesome dplyr package.

library(dplyr)

#make sure that CustomerId cols are both type numeric
#they are not when using the code provided in the question, and dplyr will complain
df1$CustomerId <- as.numeric(df1$CustomerId)
df2$CustomerId <- as.numeric(df2$CustomerId)

Mutating joins: add columns to df1 using matches in df2

#inner
inner_join(df1, df2)

#left outer
left_join(df1, df2)

#right outer
right_join(df1, df2)

#alternate right outer
left_join(df2, df1)

#full join
full_join(df1, df2)

Filtering joins: filter out rows in df1, don't modify the columns

semi_join(df1, df2) #keep only observations in df1 that match in df2.
anti_join(df1, df2) #drops all observations in df1 that match in df2.


There are some good examples of doing this over at the R Wiki. I'll steal a couple here:

Merge method

Since your keys are named the same, the short way to do an inner join is merge():

merge(df1,df2)

A full inner join (all records from both tables) can be created with the "all" keyword:

merge(df1,df2, all=TRUE)

A left outer join of df1 and df2:

merge(df1,df2, all.x=TRUE)

A right outer join of df1 and df2:

merge(df1,df2, all.y=TRUE)

You can flip them around to get the other outer joins you asked about :)

Subscript method

A left outer join with df1 on the left, using a subscript method, would be:

df1[,"State"]<-df2[df1[ ,"Product"],"State"]

Other outer join combinations can be created by chewing on the left outer join subscripting example. (Yes, I know that's the equivalent of saying "I'll leave it as an exercise for the reader...")


New in 2014:

Especially if you're also interested in data manipulation in general (including sorting, filtering, subsetting, summarizing, etc.), you should definitely take a look at dplyr, which comes with a variety of functions all designed to facilitate your work specifically with data frames and certain other database types. It even offers quite an elaborate SQL interface, and even a function to convert (most) SQL code directly into R.

The four joining-related functions in the dplyr package are (to quote):

  • inner_join(x, y, by = NULL, copy = FALSE, ...): return all rows from x where there are matching values in y, and all columns from x and y
  • left_join(x, y, by = NULL, copy = FALSE, ...): return all rows from x, and all columns from x and y
  • semi_join(x, y, by = NULL, copy = FALSE, ...): return all rows from x where there are matching values in y, keeping just columns from x
  • anti_join(x, y, by = NULL, copy = FALSE, ...): return all rows from x where there are not matching values in y, keeping just columns from x

It's all documented there in great detail.

Selecting columns can be done with select(df, "column"). If that's not SQL-ish enough for you, then there's the sql() function, into which you can enter SQL code as-is, and it will perform the operation you specified just as if you had been writing in R all along (for more information, please refer to the dplyr/databases vignette). For example, if applied correctly, sql("SELECT * FROM hflights") will select all the columns from the "hflights" dplyr table (a "tbl").
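
A minimal sketch of that SQL interface, assuming the dbplyr backend and an in-memory SQLite connection rather than the hflights data used in the vignette:

library(dplyr)
library(dbplyr)

con <- DBI::dbConnect(RSQLite::SQLite(), ":memory:")
copy_to(con, df1, "df1")   # upload the example frames to the database
copy_to(con, df2, "df2")

# hand raw SQL to the backend and get a lazy dplyr table ("tbl") back
tbl(con, sql("SELECT * FROM df1 JOIN df2 USING (CustomerId)"))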


An update on data.table methods for joining datasets. See below for examples of each type of join. There are two methods: one is to pass a second data.table as the first argument to [.data.table, which subsets; the other is to use the merge function, which dispatches to the fast data.table method.

df1 = data.frame(CustomerId = c(1:6), Product = c(rep("Toaster", 3), rep("Radio", 3)))
df2 = data.frame(CustomerId = c(2L, 4L, 7L), State = c(rep("Alabama", 2), rep("Ohio", 1))) # one value changed to show full outer join

library(data.table)

dt1 = as.data.table(df1)
dt2 = as.data.table(df2)
setkey(dt1, CustomerId)
setkey(dt2, CustomerId)
# right outer join keyed data.tables
dt1[dt2]

setkey(dt1, NULL)
setkey(dt2, NULL)
# right outer join unkeyed data.tables - use `on` argument
dt1[dt2, on ="CustomerId"]

# left outer join - swap dt1 with dt2
dt2[dt1, on ="CustomerId"]

# inner join - use `nomatch` argument
dt1[dt2, nomatch=NULL, on ="CustomerId"]

# anti join - use `!` operator
dt1[!dt2, on ="CustomerId"]

# inner join - using merge method
merge(dt1, dt2, by ="CustomerId")

# full outer join
merge(dt1, dt2, by ="CustomerId", all = TRUE)

# see ?merge.data.table arguments for other cases

Below is a benchmark of base R, sqldf, dplyr and data.table. The benchmark tests unkeyed/unindexed datasets. It was performed on datasets of 50M-1 rows, with 50M-2 common values on the join column, so every scenario (inner, left, right, full) can be tested and the join is still not trivial to perform. It is the type of join that stresses joining algorithms well. Timings are as of sqldf:0.4.11, dplyr:0.7.8 and data.table:1.12.0.

# inner
Unit: seconds
   expr       min        lq      mean    median        uq       max neval
   base 111.66266 111.66266 111.66266 111.66266 111.66266 111.66266     1
  sqldf 624.88388 624.88388 624.88388 624.88388 624.88388 624.88388     1
  dplyr  51.91233  51.91233  51.91233  51.91233  51.91233  51.91233     1
     DT  10.40552  10.40552  10.40552  10.40552  10.40552  10.40552     1
# left
Unit: seconds
   expr        min         lq       mean     median         uq        max
   base 142.782030 142.782030 142.782030 142.782030 142.782030 142.782030    
  sqldf 613.917109 613.917109 613.917109 613.917109 613.917109 613.917109    
  dplyr  49.711912  49.711912  49.711912  49.711912  49.711912  49.711912    
     DT   9.674348   9.674348   9.674348   9.674348   9.674348   9.674348      
# right
Unit: seconds
   expr        min         lq       mean     median         uq        max
   base 122.366301 122.366301 122.366301 122.366301 122.366301 122.366301    
  sqldf 611.119157 611.119157 611.119157 611.119157 611.119157 611.119157    
  dplyr  50.384841  50.384841  50.384841  50.384841  50.384841  50.384841    
     DT   9.899145   9.899145   9.899145   9.899145   9.899145   9.899145    
# full
Unit: seconds
  expr       min        lq      mean    median        uq       max neval
  base 141.79464 141.79464 141.79464 141.79464 141.79464 141.79464     1
 dplyr  94.66436  94.66436  94.66436  94.66436  94.66436  94.66436     1
    DT  21.62573  21.62573  21.62573  21.62573  21.62573  21.62573     1

Note that you can perform other types of joins using data.table (a short sketch of two of them follows this list):

  • update on join - if you want to look up values from another table into your main table
  • aggregate on join - if you want to aggregate on the key you are joining on, you do not have to materialize all the join results
  • overlapping join - if you want to merge by ranges
  • rolling join - if you want your merge to be able to match values from preceding/following rows by rolling them forward or backward
  • non-equi join - if your join condition is not equality
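
A minimal sketch of two of those, reusing the dt1/dt2 tables defined above (the particular columns here are chosen only for illustration):

# update on join: look up State from dt2 and add it to dt1 by reference
dt1[dt2, on = "CustomerId", State := i.State]

# rolling join: match each dt2 row to the nearest preceding CustomerId in dt1
dt1[dt2, on = "CustomerId", roll = TRUE]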

Code to reproduce the benchmark:

library(microbenchmark)
library(sqldf)
library(dplyr)
library(data.table)
sapply(c("sqldf","dplyr","data.table"), packageVersion, simplify=FALSE)

n = 5e7
set.seed(108)
df1 = data.frame(x=sample(n,n-1L), y1=rnorm(n-1L))
df2 = data.frame(x=sample(n,n-1L), y2=rnorm(n-1L))
dt1 = as.data.table(df1)
dt2 = as.data.table(df2)

mb = list()
# inner join
microbenchmark(times = 1L,
               base = merge(df1, df2, by ="x"),
               sqldf = sqldf("SELECT * FROM df1 INNER JOIN df2 ON df1.x = df2.x"),
               dplyr = inner_join(df1, df2, by ="x"),
               DT = dt1[dt2, nomatch=NULL, on ="x"]) -> mb$inner

# left outer join
microbenchmark(times = 1L,
               base = merge(df1, df2, by ="x", all.x = TRUE),
               sqldf = sqldf("SELECT * FROM df1 LEFT OUTER JOIN df2 ON df1.x = df2.x"),
               dplyr = left_join(df1, df2, by = c("x"="x")),
               DT = dt2[dt1, on ="x"]) -> mb$left

# right outer join
microbenchmark(times = 1L,
               base = merge(df1, df2, by ="x", all.y = TRUE),
               sqldf = sqldf("SELECT * FROM df2 LEFT OUTER JOIN df1 ON df2.x = df1.x"),
               dplyr = right_join(df1, df2, by ="x"),
               DT = dt1[dt2, on ="x"]) -> mb$right

# full outer join
microbenchmark(times = 1L,
               base = merge(df1, df2, by ="x", all = TRUE),
               dplyr = full_join(df1, df2, by ="x"),
               DT = merge(dt1, dt2, by ="x", all = TRUE)) -> mb$full

lapply(mb, print) -> nul


dplyr since 0.4 has implemented all of those joins, including outer_join, but it is worth noting that for the first few releases prior to 0.4 it did not offer outer_join, and as a result there was a lot of really bad, hacky workaround user code floating around for quite a while (you can still find such code in SO and Kaggle answers from that period).

Join-related release highlights:

v0.5 (6/2016)

  • Handling for POSIXct type, timezones, duplicates, different factor levels. Better errors and warnings.
  • New suffix argument to control what suffix duplicated variable names receive (#1296)

v0.4.0 (1/2015)

  • Implement right join and outer join (#96)
  • Mutating joins, which add new variables to one table from matching rows in another. Filtering joins, which filter observations from one table based on whether or not they match an observation in the other table.

v0.3 (10/2014)

  • Can now left_join by different variables in each table: df1 %>% left_join(df2, c("var1" = "var2"))

v0.2 (5/2014)

  • *_join() no longer reorders column names (#324)

v0.1.3 (4/2014)

  • has inner_join, left_join, semi_join, anti_join
  • outer_join not implemented yet; the fallback is to use base::merge() (or plyr::join())
  • right_join and outer_join were not implemented yet
  • Hadley mentioning other advantages there
  • one minor feature merge currently has that dplyr doesn't is the ability to have separate by.x and by.y columns, as e.g. Python pandas does

Workarounds per Hadley's comments in that issue:

  • right_join(x, y) is the same as left_join(y, x) in terms of the rows, just the columns will be in a different order. Easily worked around with select(new_column_order)
  • outer_join is basically union(left_join(x, y), right_join(x, y)) - i.e. preserve all rows in both data frames (spelled out in the sketch below)
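
Spelled out, that last workaround looks like the sketch below; with any dplyr >= 0.4 you would simply call full_join(df1, df2):

library(dplyr)

# emulate a full join as the union of a left join and a right join
full_ish <- union(left_join(df1, df2, by = "CustomerId"),
                  right_join(df1, df2, by = "CustomerId"))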


On joining two data frames with ~1 million rows each, one with 2 columns and the other with ~20, I surprisingly found merge(..., all.x = TRUE, all.y = TRUE) to be faster than dplyr::full_join(). This is with dplyr v0.4.

merge takes ~17 seconds, full_join takes ~65 seconds.

Some food for thought, though, since I generally default to dplyr for manipulation tasks.


For the case of a left join with a 0..*:0..1 cardinality, or a right join with a 0..1:0..* cardinality, it is possible to assign in place the unilateral columns from the joiner (the 0..1 table) directly onto the joinee (the 0..* table), thereby avoiding the creation of an entirely new table of data. This requires matching the key columns from the joinee into the joiner, and indexing and ordering the joiner's rows accordingly for the assignment.

If the key is a single column, then we can use a single call to match() to do the matching. That is the case I'll cover in this answer.

Here's an example based on the OP, except that I've added an extra row to df2 with an id of 7 to test the case of a non-matching key in the joiner. This is effectively df1 left join df2:

df1 <- data.frame(CustomerId=1:6,Product=c(rep('Toaster',3L),rep('Radio',3L)));
df2 <- data.frame(CustomerId=c(2L,4L,6L,7L),State=c(rep('Alabama',2L),'Ohio','Texas'));
df1[names(df2)[-1L]] <- df2[match(df1[,1L],df2[,1L]),-1L];
df1;
##   CustomerId Product   State
## 1          1 Toaster    <NA>
## 2          2 Toaster Alabama
## 3          3 Toaster    <NA>
## 4          4   Radio Alabama
## 5          5   Radio    <NA>
## 6          6   Radio    Ohio

In the above, I hard-coded an assumption that the key column is the first column of both input tables. I would argue that, in general, this is not an unreasonable assumption, since, if you have a data.frame with a key column, it would be strange if it had not been set up as the first column of the data.frame from the outset. And you can always reorder the columns to make it so. An advantageous consequence of this assumption is that the name of the key column does not have to be hard-coded, although I suppose it is just replacing one assumption with another. Concision is another advantage of integer indexing, as well as speed. In the benchmarks below I will change the implementation to use string name indexing to match the competing implementations.

I think this is a particularly appropriate solution if you have several tables that you want to left join against a single large table. Repeatedly rebuilding the entire table for each merge would be unnecessary and inefficient.

On the other hand, if you need the joinee to remain unaltered through this operation for whatever reason, then this solution cannot be used, since it modifies the joinee directly. Although in that case you could simply make a copy and perform the in-place assignment(s) on the copy.

As a side note, I briefly looked into possible matching solutions for multicolumn keys. Unfortunately, the only matching solutions I found were:

  • inefficient concatenations, e.g. match(interaction(df1$a, df1$b), interaction(df2$a, df2$b)), or the same idea with paste()
  • inefficient cartesian conjunctions, e.g. outer(df1$a, df2$a, `==`) & outer(df1$b, df2$b, `==`)
  • base R merge() and equivalent package-based merge functions, which always allocate a new table to return the merged result, and so are not suitable for an in-place assignment-based solution

For example, see Matching multiple columns on different data frames and getting other column as result, match two columns with two other columns, Matching on multiple columns, and the dupe of this question where I originally came up with the in-place solution, Combine two data frames with different number of rows in R.

Benchmarking

I decided to do my own benchmarking to see how the in-place assignment approach compares with the other solutions offered in this question.

Testing code:

library(microbenchmark);
library(data.table);
library(sqldf);
library(plyr);
library(dplyr);

solSpecs <- list(
    merge=list(testFuncs=list(
        inner=function(df1,df2,key) merge(df1,df2,key),
        left =function(df1,df2,key) merge(df1,df2,key,all.x=T),
        right=function(df1,df2,key) merge(df1,df2,key,all.y=T),
        full =function(df1,df2,key) merge(df1,df2,key,all=T)
    )),
    data.table.unkeyed=list(argSpec='data.table.unkeyed',testFuncs=list(
        inner=function(dt1,dt2,key) dt1[dt2,on=key,nomatch=0L,allow.cartesian=T],
        left =function(dt1,dt2,key) dt2[dt1,on=key,allow.cartesian=T],
        right=function(dt1,dt2,key) dt1[dt2,on=key,allow.cartesian=T],
        full =function(dt1,dt2,key) merge(dt1,dt2,key,all=T,allow.cartesian=T) ## calls merge.data.table()
    )),
    data.table.keyed=list(argSpec='data.table.keyed',testFuncs=list(
        inner=function(dt1,dt2) dt1[dt2,nomatch=0L,allow.cartesian=T],
        left =function(dt1,dt2) dt2[dt1,allow.cartesian=T],
        right=function(dt1,dt2) dt1[dt2,allow.cartesian=T],
        full =function(dt1,dt2) merge(dt1,dt2,all=T,allow.cartesian=T) ## calls merge.data.table()
    )),
    sqldf.unindexed=list(testFuncs=list( ## note: must pass connection=NULL to avoid running against the live DB connection, which would result in collisions with the residual tables from the last query upload
        inner=function(df1,df2,key) sqldf(paste0('select * from df1 inner join df2 using(',paste(collapse=',',key),')'),connection=NULL),
        left =function(df1,df2,key) sqldf(paste0('select * from df1 left join df2 using(',paste(collapse=',',key),')'),connection=NULL),
        right=function(df1,df2,key) sqldf(paste0('select * from df2 left join df1 using(',paste(collapse=',',key),')'),connection=NULL) ## can't do right join proper, not yet supported; inverted left join is equivalent
        ##full =function(df1,df2,key) sqldf(paste0('select * from df1 full join df2 using(',paste(collapse=',',key),')'),connection=NULL) ## can't do full join proper, not yet supported; possible to hack it with a union of left joins, but too unreasonable to include in testing
    )),
    sqldf.indexed=list(testFuncs=list( ## important: requires an active DB connection with preindexed main.df1 and main.df2 ready to go; arguments are actually ignored
        inner=function(df1,df2,key) sqldf(paste0('select * from main.df1 inner join main.df2 using(',paste(collapse=',',key),')')),
        left =function(df1,df2,key) sqldf(paste0('select * from main.df1 left join main.df2 using(',paste(collapse=',',key),')')),
        right=function(df1,df2,key) sqldf(paste0('select * from main.df2 left join main.df1 using(',paste(collapse=',',key),')')) ## can't do right join proper, not yet supported; inverted left join is equivalent
        ##full =function(df1,df2,key) sqldf(paste0('select * from main.df1 full join main.df2 using(',paste(collapse=',',key),')')) ## can't do full join proper, not yet supported; possible to hack it with a union of left joins, but too unreasonable to include in testing
    )),
    plyr=list(testFuncs=list(
        inner=function(df1,df2,key) join(df1,df2,key,'inner'),
        left =function(df1,df2,key) join(df1,df2,key,'left'),
        right=function(df1,df2,key) join(df1,df2,key,'right'),
        full =function(df1,df2,key) join(df1,df2,key,'full')
    )),
    dplyr=list(testFuncs=list(
        inner=function(df1,df2,key) inner_join(df1,df2,key),
        left =function(df1,df2,key) left_join(df1,df2,key),
        right=function(df1,df2,key) right_join(df1,df2,key),
        full =function(df1,df2,key) full_join(df1,df2,key)
    )),
    in.place=list(testFuncs=list(
        left =function(df1,df2,key) { cns <- setdiff(names(df2),key); df1[cns] <- df2[match(df1[,key],df2[,key]),cns]; df1; },
        right=function(df1,df2,key) { cns <- setdiff(names(df1),key); df2[cns] <- df1[match(df2[,key],df1[,key]),cns]; df2; }
    ))
);

getSolTypes <- function() names(solSpecs);
getJoinTypes <- function() unique(unlist(lapply(solSpecs,function(x) names(x$testFuncs))));
getArgSpec <- function(argSpecs,key=NULL) if (is.null(key)) argSpecs$default else argSpecs[[key]];

initSqldf <- function() {
    sqldf(); ## creates sqlite connection on first run, cleans up and closes existing connection otherwise
    if (exists('sqldfInitFlag',envir=globalenv(),inherits=F) && sqldfInitFlag) { ## false only on first run
        sqldf(); ## creates a new connection
    } else {
        assign('sqldfInitFlag',T,envir=globalenv()); ## set to true for the one and only time
    }; ## end if
    invisible();
}; ## end initSqldf()

setUpBenchmarkCall <- function(argSpecs,joinType,solTypes=getSolTypes(),env=parent.frame()) {
    ## builds and returns a list of expressions suitable for passing to the list argument of microbenchmark(), and assigns variables to resolve symbol references in those expressions
    callExpressions <- list();
    nms <- character();
    for (solType in solTypes) {
        testFunc <- solSpecs[[solType]]$testFuncs[[joinType]];
        if (is.null(testFunc)) next; ## this join type is not defined for this solution type
        testFuncName <- paste0('tf.',solType);
        assign(testFuncName,testFunc,envir=env);
        argSpecKey <- solSpecs[[solType]]$argSpec;
        argSpec <- getArgSpec(argSpecs,argSpecKey);
        argList <- setNames(nm=names(argSpec$args),vector('list',length(argSpec$args)));
        for (i in seq_along(argSpec$args)) {
            argName <- paste0('tfa.',argSpecKey,i);
            assign(argName,argSpec$args[[i]],envir=env);
            argList[[i]] <- if (i%in%argSpec$copySpec) call('copy',as.symbol(argName)) else as.symbol(argName);
        }; ## end for
        callExpressions[[length(callExpressions)+1L]] <- do.call(call,c(list(testFuncName),argList),quote=T);
        nms[length(nms)+1L] <- solType;
    }; ## end for
    names(callExpressions) <- nms;
    callExpressions;
}; ## end setUpBenchmarkCall()

harmonize <- function(res) {
    res <- as.data.frame(res); ## coerce to data.frame
    for (ci in which(sapply(res,is.factor))) res[[ci]] <- as.character(res[[ci]]); ## coerce factor columns to character
    for (ci in which(sapply(res,is.logical))) res[[ci]] <- as.integer(res[[ci]]); ## coerce logical columns to integer (works around sqldf quirk of munging logicals to integers)
    ##for (ci in which(sapply(res,inherits,'POSIXct'))) res[[ci]] <- as.double(res[[ci]]); ## coerce POSIXct columns to double (works around sqldf quirk of losing POSIXct class) ----- POSIXct doesn't work at all in sqldf.indexed
    res <- res[order(names(res))]; ## order columns
    res <- res[do.call(order,res),]; ## order rows
    res;
}; ## end harmonize()

checkIdentical <- function(argSpecs,solTypes=getSolTypes()) {
    for (joinType in getJoinTypes()) {
        callExpressions <- setUpBenchmarkCall(argSpecs,joinType,solTypes);
        if (length(callExpressions)<2L) next;
        ex <- harmonize(eval(callExpressions[[1L]]));
        for (i in seq(2L,len=length(callExpressions)-1L)) {
            y <- harmonize(eval(callExpressions[[i]]));
            if (!isTRUE(all.equal(ex,y,check.attributes=F))) {
                ex <<- ex;
                y <<- y;
                solType <- names(callExpressions)[i];
                stop(paste0('non-identical: ',solType,' ',joinType,'.'));
            }; ## end if
        }; ## end for
    }; ## end for
    invisible();
}; ## end checkIdentical()

testJoinType <- function(argSpecs,joinType,solTypes=getSolTypes(),metric=NULL,times=100L) {
    callExpressions <- setUpBenchmarkCall(argSpecs,joinType,solTypes);
    bm <- microbenchmark(list=callExpressions,times=times);
    if (is.null(metric)) return(bm);
    bm <- summary(bm);
    res <- setNames(nm=names(callExpressions),bm[[metric]]);
    attr(res,'unit') <- attr(bm,'unit');
    res;
}; ## end testJoinType()

testAllJoinTypes <- function(argSpecs,solTypes=getSolTypes(),metric=NULL,times=100L) {
    joinTypes <- getJoinTypes();
    resList <- setNames(nm=joinTypes,lapply(joinTypes,function(joinType) testJoinType(argSpecs,joinType,solTypes,metric,times)));
    if (is.null(metric)) return(resList);
    units <- unname(unlist(lapply(resList,attr,'unit')));
    res <- do.call(data.frame,c(list(join=joinTypes),setNames(nm=solTypes,rep(list(rep(NA_real_,length(joinTypes))),length(solTypes))),list(unit=units,stringsAsFactors=F)));
    for (i in seq_along(resList)) res[i,match(names(resList[[i]]),names(res))] <- resList[[i]];
    res;
}; ## end testAllJoinTypes()

testGrid <- function(makeArgSpecsFunc,sizes,overlaps,solTypes=getSolTypes(),joinTypes=getJoinTypes(),metric='median',times=100L) {

    res <- expand.grid(size=sizes,overlap=overlaps,joinType=joinTypes,stringsAsFactors=F);
    res[solTypes] <- NA_real_;
    res$unit <- NA_character_;
    for (ri in seq_len(nrow(res))) {

        size <- res$size[ri];
        overlap <- res$overlap[ri];
        joinType <- res$joinType[ri];

        argSpecs <- makeArgSpecsFunc(size,overlap);

        checkIdentical(argSpecs,solTypes);

        cur <- testJoinType(argSpecs,joinType,solTypes,metric,times);
        res[ri,match(names(cur),names(res))] <- cur;
        res$unit[ri] <- attr(cur,'unit');

    }; ## end for

    res;

}; ## end testGrid()

Here's a benchmark based on the OP's example, using the test code demonstrated above:

## OP's example, supplemented with a non-matching row in df2
argSpecs <- list(
    default=list(copySpec=1:2,args=list(
        df1 <- data.frame(CustomerId=1:6,Product=c(rep('Toaster',3L),rep('Radio',3L))),
        df2 <- data.frame(CustomerId=c(2L,4L,6L,7L),State=c(rep('Alabama',2L),'Ohio','Texas')),
        'CustomerId'
    )),
    data.table.unkeyed=list(copySpec=1:2,args=list(
        as.data.table(df1),
        as.data.table(df2),
        'CustomerId'
    )),
    data.table.keyed=list(copySpec=1:2,args=list(
        setkey(as.data.table(df1),CustomerId),
        setkey(as.data.table(df2),CustomerId)
    ))
);
## prepare sqldf
initSqldf();
sqldf('create index df1_key on df1(CustomerId);'); ## upload and create an sqlite index on df1
sqldf('create index df2_key on df2(CustomerId);'); ## upload and create an sqlite index on df2

checkIdentical(argSpecs);

testAllJoinTypes(argSpecs,metric='median');
##    join    merge data.table.unkeyed data.table.keyed sqldf.unindexed sqldf.indexed      plyr    dplyr in.place         unit
## 1 inner  644.259           861.9345          923.516        9157.752      1580.390  959.2250 270.9190       NA microseconds
## 2  left  713.539           888.0205          910.045        8820.334      1529.714  968.4195 270.9185 224.3045 microseconds
## 3 right 1221.804           909.1900          923.944        8930.668      1533.135 1063.7860 269.8495 218.1035 microseconds
## 4  full 1302.203          3107.5380         3184.729              NA            NA 1593.6475 270.7055       NA microseconds

Here I benchmark random input data, trying different scales and different patterns of key overlap between the two input tables. This benchmark is still restricted to the case of a single-column integer key. As well, to ensure that the in-place solution would work for both left and right joins of the same tables, all random test data uses 0..1:0..1 cardinality. This is implemented by sampling without replacement the key column of the first data.frame when generating the key column of the second data.frame.

makeArgSpecs.singleIntegerKey.optionalOneToOne <- function(size,overlap) {

    com <- as.integer(size*overlap);

    argSpecs <- list(
        default=list(copySpec=1:2,args=list(
            df1 <- data.frame(id=sample(size),y1=rnorm(size),y2=rnorm(size)),
            df2 <- data.frame(id=sample(c(if (com>0L) sample(df1$id,com) else integer(),seq(size+1L,len=size-com))),y3=rnorm(size),y4=rnorm(size)),
            'id'
        )),
        data.table.unkeyed=list(copySpec=1:2,args=list(
            as.data.table(df1),
            as.data.table(df2),
            'id'
        )),
        data.table.keyed=list(copySpec=1:2,args=list(
            setkey(as.data.table(df1),id),
            setkey(as.data.table(df2),id)
        ))
    );
    ## prepare sqldf
    initSqldf();
    sqldf('create index df1_key on df1(id);'); ## upload and create an sqlite index on df1
    sqldf('create index df2_key on df2(id);'); ## upload and create an sqlite index on df2

    argSpecs;

}; ## end makeArgSpecs.singleIntegerKey.optionalOneToOne()

## cross of various input sizes and key overlaps
sizes <- c(1e1L,1e3L,1e6L);
overlaps <- c(0.99,0.5,0.01);
system.time({ res <- testGrid(makeArgSpecs.singleIntegerKey.optionalOneToOne,sizes,overlaps); });
##     user   system  elapsed
## 22024.65 12308.63 34493.19

I wrote some code to create log-log plots of the above results. I generated a separate plot for each overlap percentage. It's a little bit cluttered, but I like having all the solution types and join types represented in the same plot.

I used spline interpolation to show a smooth curve for each solution/join type combination, drawn with individual pch symbols. The join type is captured by the pch symbol, using a dot for inner, left and right angle brackets for left and right, and a diamond for full. The solution type is captured by the color, as shown in the legend.

plotRes <- function(res,titleFunc,useFloor=F) {
    solTypes <- setdiff(names(res),c('size','overlap','joinType','unit')); ## derive from res
    normMult <- c(microseconds=1e-3,milliseconds=1); ## normalize to milliseconds
    joinTypes <- getJoinTypes();
    cols <- c(merge='purple',data.table.unkeyed='blue',data.table.keyed='#00DDDD',sqldf.unindexed='brown',sqldf.indexed='orange',plyr='red',dplyr='#00BB00',in.place='magenta');
    pchs <- list(inner=20L,left='<',right='>',full=23L);
    cexs <- c(inner=0.7,left=1,right=1,full=0.7);
    NP <- 60L;
    ord <- order(decreasing=T,colMeans(res[res$size==max(res$size),solTypes],na.rm=T));
    ymajors <- data.frame(y=c(1,1e3),label=c('1ms','1s'),stringsAsFactors=F);
    for (overlap in unique(res$overlap)) {
        x1 <- res[res$overlap==overlap,];
        x1[solTypes] <- x1[solTypes]*normMult[x1$unit]; x1$unit <- NULL;
        xlim <- c(1e1,max(x1$size));
        xticks <- 10^seq(log10(xlim[1L]),log10(xlim[2L]));
        ylim <- c(1e-1,10^((if (useFloor) floor else ceiling)(log10(max(x1[solTypes],na.rm=T))))); ## use floor() to zoom in a little more, only sqldf.unindexed will break above, but xpd=NA will keep it visible
        yticks <- 10^seq(log10(ylim[1L]),log10(ylim[2L]));
        yticks.minor <- rep(yticks[-length(yticks)],each=9L)*1:9;
        plot(NA,xlim=xlim,ylim=ylim,xaxs='i',yaxs='i',axes=F,xlab='size (rows)',ylab='time (ms)',log='xy');
        abline(v=xticks,col='lightgrey');
        abline(h=yticks.minor,col='lightgrey',lty=3L);
        abline(h=yticks,col='lightgrey');
        axis(1L,xticks,parse(text=sprintf('10^%d',as.integer(log10(xticks)))));
        axis(2L,yticks,parse(text=sprintf('10^%d',as.integer(log10(yticks)))),las=1L);
        axis(4L,ymajors$y,ymajors$label,las=1L,tick=F,cex.axis=0.7,hadj=0.5);
        for (joinType in rev(joinTypes)) { ## reverse to draw full first, since it's larger and would be more obtrusive if drawn last
            x2 <- x1[x1$joinType==joinType,];
            for (solType in solTypes) {
                if (any(!is.na(x2[[solType]]))) {
                    xy <- spline(x2$size,x2[[solType]],xout=10^(seq(log10(x2$size[1L]),log10(x2$size[nrow(x2)]),len=NP)));
                    points(xy$x,xy$y,pch=pchs[[joinType]],col=cols[solType],cex=cexs[joinType],xpd=NA);
                }; ## end if
            }; ## end for
        }; ## end for
        ## custom legend
        ## due to logarithmic skew, must do all distance calcs in inches, and convert to user coords afterward
        ## the bottom-left corner of the legend will be defined in normalized figure coords, although we can convert to inches immediately
        leg.cex <- 0.7;
        leg.x.in <- grconvertX(0.275,'nfc','in');
        leg.y.in <- grconvertY(0.6,'nfc','in');
        leg.x.user <- grconvertX(leg.x.in,'in');
        leg.y.user <- grconvertY(leg.y.in,'in');
        leg.outpad.w.in <- 0.1;
        leg.outpad.h.in <- 0.1;
        leg.midpad.w.in <- 0.1;
        leg.midpad.h.in <- 0.1;
        leg.sol.w.in <- max(strwidth(solTypes,'in',leg.cex));
        leg.sol.h.in <- max(strheight(solTypes,'in',leg.cex))*1.5; ## multiplication factor for greater line height
        leg.join.w.in <- max(strheight(joinTypes,'in',leg.cex))*1.5; ## ditto
        leg.join.h.in <- max(strwidth(joinTypes,'in',leg.cex));
        leg.main.w.in <- leg.join.w.in*length(joinTypes);
        leg.main.h.in <- leg.sol.h.in*length(solTypes);
        leg.x2.user <- grconvertX(leg.x.in+leg.outpad.w.in*2+leg.main.w.in+leg.midpad.w.in+leg.sol.w.in,'in');
        leg.y2.user <- grconvertY(leg.y.in+leg.outpad.h.in*2+leg.main.h.in+leg.midpad.h.in+leg.join.h.in,'in');
        leg.cols.x.user <- grconvertX(leg.x.in+leg.outpad.w.in+leg.join.w.in*(0.5+seq(0L,length(joinTypes)-1L)),'in');
        leg.lines.y.user <- grconvertY(leg.y.in+leg.outpad.h.in+leg.main.h.in-leg.sol.h.in*(0.5+seq(0L,length(solTypes)-1L)),'in');
        leg.sol.x.user <- grconvertX(leg.x.in+leg.outpad.w.in+leg.main.w.in+leg.midpad.w.in,'in');
        leg.join.y.user <- grconvertY(leg.y.in+leg.outpad.h.in+leg.main.h.in+leg.midpad.h.in,'in');
        rect(leg.x.user,leg.y.user,leg.x2.user,leg.y2.user,col='white');
        text(leg.sol.x.user,leg.lines.y.user,solTypes[ord],cex=leg.cex,pos=4L,offset=0);
        text(leg.cols.x.user,leg.join.y.user,joinTypes,cex=leg.cex,pos=4L,offset=0,srt=90); ## srt rotation applies *after* pos/offset positioning
        for (i in seq_along(joinTypes)) {
            joinType <- joinTypes[i];
            points(rep(leg.cols.x.user[i],length(solTypes)),ifelse(colSums(!is.na(x1[x1$joinType==joinType,solTypes[ord]]))==0L,NA,leg.lines.y.user),pch=pchs[[joinType]],col=cols[solTypes[ord]]);
        }; ## end for
        title(titleFunc(overlap));
        readline(sprintf('overlap %.02f',overlap));
    }; ## end for
}; ## end plotRes()

titleFunc <- function(overlap) sprintf('R merge solutions: single-column integer key, 0..1:0..1 cardinality, %d%% overlap',as.integer(overlap*100));
plotRes(res,titleFunc,T);

[plot: R-merge-benchmark-single-column-integer-key-optional-one-to-one-99]

[plot: R-merge-benchmark-single-column-integer-key-optional-one-to-one-50]

[plot: R-merge-benchmark-single-column-integer-key-optional-one-to-one-1]

Here's a second, larger-scale benchmark that's more heavy-duty with respect to the number and types of key columns, as well as cardinality. For this benchmark I use three key columns: one character, one integer, and one logical, with no restrictions on cardinality (that is, 0..*:0..*). (In general it is not advisable to define key columns with double or complex values due to floating-point comparison complications, and basically no one ever uses the raw type, much less for key columns, so I have not included those types among the keys. Also, for information's sake, I initially tried to use four key columns by including a POSIXct key column, but the POSIXct type did not play well with the sqldf.indexed solution for some reason, possibly due to floating-point comparison anomalies, so I removed it.)

makeArgSpecs.assortedKey.optionalManyToMany <- function(size,overlap,uniquePct=75) {

    ## number of unique keys in df1
    u1Size <- as.integer(size*uniquePct/100);

    ## (roughly) divide u1Size into bases, so we can use expand.grid() to produce the required number of unique key values with repetitions within individual key columns
    ## use ceiling() to ensure we cover u1Size; will truncate afterward
    u1SizePerKeyColumn <- as.integer(ceiling(u1Size^(1/3)));

    ## generate the unique key values for df1
    keys1 <- expand.grid(stringsAsFactors=F,
        idCharacter=replicate(u1SizePerKeyColumn,paste(collapse='',sample(letters,sample(4:12,1L),T))),
        idInteger=sample(u1SizePerKeyColumn),
        idLogical=sample(c(F,T),u1SizePerKeyColumn,T)
        ##idPOSIXct=as.POSIXct('2016-01-01 00:00:00','UTC')+sample(u1SizePerKeyColumn)
    )[seq_len(u1Size),];

    ## rbind some repetitions of the unique keys; this will prepare one side of the many-to-many relationship
    ## also scramble the order afterward
    keys1 <- rbind(keys1,keys1[sample(nrow(keys1),size-u1Size,T),])[sample(size),];

    ## common and unilateral key counts
    com <- as.integer(size*overlap);
    uni <- size-com;

    ## generate some unilateral keys for df2 by synthesizing outside of the idInteger range of df1
    keys2 <- data.frame(stringsAsFactors=F,
        idCharacter=replicate(uni,paste(collapse='',sample(letters,sample(4:12,1L),T))),
        idInteger=u1SizePerKeyColumn+sample(uni),
        idLogical=sample(c(F,T),uni,T)
        ##idPOSIXct=as.POSIXct('2016-01-01 00:00:00','UTC')+u1SizePerKeyColumn+sample(uni)
    );

    ## rbind random keys from df1; this will complete the many-to-many relationship
    ## also scramble the order afterward
    keys2 <- rbind(keys2,keys1[sample(nrow(keys1),com,T),])[sample(size),];

    ##keyNames <- c('idCharacter','idInteger','idLogical','idPOSIXct');
    keyNames <- c('idCharacter','idInteger','idLogical');
    ## note: was going to use raw and complex type for two of the non-key columns, but data.table doesn't seem to fully support them
    argSpecs <- list(
        default=list(copySpec=1:2,args=list(
            df1 <- cbind(stringsAsFactors=F,keys1,y1=sample(c(F,T),size,T),y2=sample(size),y3=rnorm(size),y4=replicate(size,paste(collapse='',sample(letters,sample(4:12,1L),T)))),
            df2 <- cbind(stringsAsFactors=F,keys2,y5=sample(c(F,T),size,T),y6=sample(size),y7=rnorm(size),y8=replicate(size,paste(collapse='',sample(letters,sample(4:12,1L),T)))),
            keyNames
        )),
        data.table.unkeyed=list(copySpec=1:2,args=list(
            as.data.table(df1),
            as.data.table(df2),
            keyNames
        )),
        data.table.keyed=list(copySpec=1:2,args=list(
            setkeyv(as.data.table(df1),keyNames),
            setkeyv(as.data.table(df2),keyNames)
        ))
    );
    ## prepare sqldf
    initSqldf();
    sqldf(paste0('create index df1_key on df1(',paste(collapse=',',keyNames),');')); ## upload and create an sqlite index on df1
    sqldf(paste0('create index df2_key on df2(',paste(collapse=',',keyNames),');')); ## upload and create an sqlite index on df2

    argSpecs;

}; ## end makeArgSpecs.assortedKey.optionalManyToMany()

sizes <- c(1e1L,1e3L,1e5L); ## 1e5L instead of 1e6L to respect more heavy-duty inputs
overlaps <- c(0.99,0.5,0.01);
solTypes <- setdiff(getSolTypes(),'in.place');
system.time({ res <- testGrid(makeArgSpecs.assortedKey.optionalManyToMany,sizes,overlaps,solTypes); });
##     user   system  elapsed
## 38895.50   784.19 39745.53

Plots generated using the same plotting code given above:

titleFunc <- function(overlap) sprintf('R merge solutions: character/integer/logical key, 0..*:0..* cardinality, %d%% overlap',as.integer(overlap*100));
plotRes(res,titleFunc,F);

[plot: R-merge-benchmark-assorted-key-optional-many-to-many-99]

[plot: R-merge-benchmark-assorted-key-optional-many-to-many-50]

[plot: R-merge-benchmark-assorted-key-optional-many-to-many-1]


For an inner join on all columns, you could also use fintersect from the data.table package or intersect from the dplyr package as an alternative to merge, without specifying the by columns. This gives the rows that are equal between the two data frames:

merge(df1, df2)
#   V1 V2
# 1  B  2
# 2  C  3
dplyr::intersect(df1, df2)
#   V1 V2
# 1  B  2
# 2  C  3
data.table::fintersect(setDT(df1), setDT(df2))
#    V1 V2
# 1:  B  2
# 2:  C  3

Example data:

df1 <- data.frame(V1 = LETTERS[1:4], V2 = 1:4)
df2 <- data.frame(V1 = LETTERS[2:3], V2 = 2:3)


  • Using the merge function, we can select the variables of the left table or the right table, the same way we are all familiar with the select statement in SQL (e.g.: select a.* or select b.* from ...).
  • We have to add extra code which will subset from the newly joined table.

    • SQL: select a.* from df1 a inner join df2 b on a.CustomerId = b.CustomerId

    • R: merge(df1, df2, by.x = "CustomerId", by.y = "CustomerId")[, names(df1)]

  • In the same way:

    • SQL: select b.* from df1 a inner join df2 b on a.CustomerId = b.CustomerId

    • R: merge(df1, df2, by.x = "CustomerId", by.y = "CustomerId")[, names(df2)]


    Update join. Another important SQL-style join is an "update join", where columns in one table are updated (or created) using another table.

    Modifying the OP's example tables...

    sales = data.frame(
      CustomerId = c(1, 1, 1, 3, 4, 6),
      Year = 2000:2005,
      Product = c(rep("Toaster", 3), rep("Radio", 3))
    )
    cust = data.frame(
      CustomerId = c(1, 1, 4, 6),
      Year = c(2001L, 2002L, 2002L, 2002L),
      State = state.name[1:4]
    )

    sales
    # CustomerId Year Product
    #          1 2000 Toaster
    #          1 2001 Toaster
    #          1 2002 Toaster
    #          3 2003   Radio
    #          4 2004   Radio
    #          6 2005   Radio

    cust
    # CustomerId Year    State
    #          1 2001  Alabama
    #          1 2002   Alaska
    #          4 2002  Arizona
    #          6 2002 Arkansas

    Suppose we want to add the state of the customer from cust to the purchases table sales, ignoring the year column. With base R, we can identify the matching rows and then copy the values over:

    sales$State <- cust$State[ match(sales$CustomerId, cust$CustomerId) ]

    # CustomerId Year Product    State
    #          1 2000 Toaster  Alabama
    #          1 2001 Toaster  Alabama
    #          1 2002 Toaster  Alabama
    #          3 2003   Radio     <NA>
    #          4 2004   Radio  Arizona
    #          6 2005   Radio Arkansas

    # cleanup for the next example
    sales$State <- NULL

    As shown, match selects the first matching row from the customer table.
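
    The raw match() result makes that visible: customer 1 picks up index 1, the first of its two rows in cust, and the NA marks customer 3, who has no row in cust at all:

    match(sales$CustomerId, cust$CustomerId)
    # [1]  1  1  1 NA  3  4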

    Update join with multiple columns. The approach above works well when we are joining on a single column and are satisfied with the first match. Suppose we want the year of measurement in the customer table to match the year of sale.

    As @bgoldst's answer mentions, match with interaction might be an option for this case (a base R sketch of it follows the data.table example below). More straightforwardly, we can use data.table:

    library(data.table)
    setDT(sales); setDT(cust)

    sales[, State := cust[sales, on=.(CustomerId, Year), x.State]]

    #    CustomerId Year Product   State
    # 1:          1 2000 Toaster    <NA>
    # 2:          1 2001 Toaster Alabama
    # 3:          1 2002 Toaster  Alaska
    # 4:          3 2003   Radio    <NA>
    # 5:          4 2004   Radio    <NA>
    # 6:          6 2005   Radio    <NA>

    # cleanup for next example
    sales[, State := NULL]
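
    For comparison, the match() + interaction() idea mentioned above could look like the base R sketch below, matching on a composite of both key columns:

    sales$State <- cust$State[ match(interaction(sales$CustomerId, sales$Year),
                                     interaction(cust$CustomerId, cust$Year)) ]
    sales$State <- NULL   # clean up again before the rolling join example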

    Rolling update join. Alternatively, we may want to take the last state the customer was found in:

    sales[, State := cust[sales, on=.(CustomerId, Year), roll=TRUE, x.State]]

    #    CustomerId Year Product    State
    # 1:          1 2000 Toaster     <NA>
    # 2:          1 2001 Toaster  Alabama
    # 3:          1 2002 Toaster   Alaska
    # 4:          3 2003   Radio     <NA>
    # 5:          4 2004   Radio  Arizona
    # 6:          6 2005   Radio Arkansas

    The three examples above all focus on creating/adding a new column. See the related R FAQ for an example of updating or modifying an existing column.