
r - Techniques for finding bad data in read.csv in R


I'm reading in a data file that looks like this:

userId, fullName,email,password,activated,registrationDate,locale,notifyOnUpdates,lastSyncTime,plan_id,plan_period_months,plan_price,plan_exp_date,plan_is_trial,plan_is_trial_used,q_hear,q_occupation,pp_subid,pp_payments,pp_since,pp_cancelled,apikey
"2","John Smith,"john.smith@gmail.com","a","1","2004-07-23 14:19:32","en_US","1","2011-04-07 07:29:17","3",\N,\N,\N,"0","1",\N,\N,\N,\N,\N,\N,"d7734dce-4ae2-102a-8951-0040ca38ff83"

but the actual file has about 20,000 records. I use the following R code to read it:

user = read.csv("~/Desktop/dbdump/users.txt", na.strings = "\\N", quote="")

The reason I have quote="" there is that without it the import stops prematurely. I end up with a total of 9569 observations. Why quote="" gets around the problem I don't understand, but it seems to.

Except that it introduces other problems I have to "fix". The first one I saw is that the dates end up as strings that include the quotes, and they don't want to convert to actual dates when I use to.Date() on them.
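For reference, stripping the quotes first makes the conversion go through. A minimal sketch (the as.POSIXct call and the cleanup are illustrative choices, not the original code):

```r
# With quote = "", the quote characters stay inside the values; strip them
# before parsing the timestamp columns.
x <- '"2004-07-23 14:19:32"'             # what a date looks like after quote=""
clean <- gsub('"', '', x, fixed = TRUE)  # drop the literal quote characters
as.POSIXct(clean, tz = "UTC")            # parses once the quotes are gone
```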

Now I could fix up the strings and hack my way through. But it would be better to understand more about what I'm doing. Can someone explain:

  1. Why does quote="" fix the "bad data"?
  2. What is the best-practice technique for figuring out what makes read.csv stop early? (If I just look at the input data around the indicated lines, I can't see anything wrong.)
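For question 1, an unclosed quote can be reproduced on a toy file (a hypothetical miniature of the data above, not the real users.txt):

```r
# An unclosed quote makes read.csv pair it with the next quote it finds,
# swallowing whole lines into one field; quote = "" turns quoting off so
# every line parses on its own.
tmp <- tempfile(fileext = ".csv")
writeLines(c(
  'id,name',
  '1,"ok"',
  '2,"John Smith',     # missing closing quote, as in the record above
  '3,"fine"',
  '4,"also fine"'
), tmp)

with_quotes    <- suppressWarnings(read.csv(tmp))  # likely warns: EOF within quoted string
without_quotes <- read.csv(tmp, quote = "")

nrow(with_quotes)     # fewer than the 4 data rows in the file
nrow(without_quotes)  # all 4 rows survive (but quotes stay in the values)
```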

Here are a few lines "near" the "problem". Is there corruption I'm not seeing?

"16888","user1","user1@gmail.com","TeilS12","1","2008-01-19 08:47:45","en_US","0","2008-02-23 16:51:53","1",\N,\N,\N,"0","0","article","student",\N,\N,\N,\N,"ad949a8e-17ed-102b-9237-0040ca390025"
"16889","user2","user2@gmail.com","Gaspar","1","2008-01-19 10:34:11","en_US","1",\N,"1",\N,\N,\N,"0","0","email","journalist",\N,\N,\N,\N,"8b90f63a-17fc-102b-9237-0040ca390025"
"16890","user3","user3@gmail.com","boomblaadje","1","2008-01-19 14:36:54","en_US","0",\N,"1",\N,\N,\N,"0","0","article","student",\N,\N,\N,\N,"73f31f4a-181e-102b-9237-0040ca390025"
"16891","user4","user4@gmail.com","mytyty","1","2008-01-19 15:10:45","en_US","1","2008-01-19 15:16:45","1",\N,\N,\N,"0","0","google-ad","student",\N,\N,\N,\N,"2e48e308-1823-102b-9237-0040ca390025"
"16892","user5","user5@gmail.com","08091969","1","2008-01-19 15:12:50","en_US","1",\N,"1",\N,\N,\N,"0","0","dont","dont",\N,\N,\N,\N,"79051bc8-1823-102b-9237-0040ca390025"

* Update *

It's trickier than that. Even though the total number of rows imported is 9569, if I look at the last few rows they correspond to the last few rows of the data. So I surmise that something happened during the import to cause a lot of rows to be skipped. In fact, 15914 - 9569 = 6345 records. When I put the quote="" in there, I get 15914.

So my question can be restated: is there a way to get read.csv to report the lines it decides not to import?

* Update 2 *

@Dwin, I had to remove na.strings="\N" because the count.fields function doesn't accept it. With that, I get this output, which looks interesting but which I don't understand.

    3     4    22    23    24 
    1    83 15466   178     4 

Your second command produces a lot of output (and stops when max.print is reached), but the first line looks like this:

[1]  2  4  2  3  5  3  3  3  5  3  3  3  2  3  4  2  3  2  2  3  2  2  4  2  4  3  5  4  3  4  3  3  3  3  3  2  4

I don't understand whether this output is supposed to show how many fields each input record has. Clearly the first lines all have more than 2, 4, 2, etc. fields... I feel like I'm getting closer, but I'm still confused!

Best Answer

The count.fields function can be very useful for identifying where to look for malformed data.

This gives a tally of the fields in each line, ignoring quoting, which could be a problem if there are embedded commas:

table( count.fields("~/Desktop/dbdump/users.txt", quote="", sep=",") ) 

This gives a tabulation ignoring both quotes and "#" (octothorpe) as a comment character:

table( count.fields("~/Desktop/dbdump/users.txt",  quote="", comment.char="") )

Looking at what you report for your first tabulation... where most lines are as desired... you can get a list of the positions of lines with non-22 values (using the comma and no-quote settings):

which( count.fields("~/Desktop/dbdump/users.txt", quote="", sep=",") != 22)
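The whole workflow can be sketched on a small self-contained file (a toy standing in for users.txt; the real file would use 22 in place of 3):

```r
# count.fields returns the number of fields on each line; table() shows the
# distribution, and which() locates the lines that deviate from the norm.
tmp <- tempfile(fileext = ".csv")
writeLines(c(
  'a,b,c',
  '1,2,3',
  '4,5',        # one field short
  '6,7,8,9'     # one field over
), tmp)

n <- count.fields(tmp, sep = ",", quote = "")
table(n)                    # fields-per-line distribution: 2->1, 3->2, 4->1
which(n != 3)               # line numbers to inspect: 3 4
readLines(tmp)[which(n != 3)]  # pull the raw offending lines
```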

Sometimes the problem can be fixed with fill=TRUE if the only difficulty is missing commas at the ends of lines.
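A sketch of what fill = TRUE does (note that read.csv already defaults to fill = TRUE, so the contrast is clearest with read.table):

```r
# fill = TRUE pads short rows with NA instead of aborting the read.
tmp <- tempfile(fileext = ".csv")
writeLines(c('a,b,c', '1,2,3', '4,5'), tmp)

## read.table(tmp, sep = ",", header = TRUE)  # errors: line 2 did not have 3 elements
filled <- read.table(tmp, sep = ",", header = TRUE, fill = TRUE)
filled$c[2]   # NA: the missing trailing field was filled in
```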

Regarding "r - Techniques for finding bad data in read.csv in R", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/15771557/
