I have an Apache combined-format log file stored in HDFS. Here is a sample of the first few lines:
123.125.67.216 - - [02/Jan/2012:00:48:27 -0800] "GET /wiki/Dekart HTTP/1.1" 200 4512 "-" "Mozilla/5.0 (Windows NT 5.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.2"
209.85.238.130 - - [02/Jan/2012:00:48:27 -0800] "GET /w/index.php?title=Special:RecentChanges&feed=atom HTTP/1.1" 304 260 "-" "Feedfetcher-Google; (+http://www.google.com/feedfetcher.html; 4 subscribers; feed-id=11568779694056348047)"
123.125.67.213 - - [02/Jan/2012:00:48:33 -0800] "GET / HTTP/1.1" 301 433 "-" "Mozilla/5.0 (Windows NT 5.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.2"
123.125.67.214 - - [02/Jan/2012:00:48:33 -0800] "GET /wiki/Main_Page HTTP/1.1" 200 8647 "-" "Mozilla/5.0 (Windows NT 5.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.2"
grunt> raw = LOAD 'log' USING org.apache.pig.piggybank.storage.apachelog.CombinedLogLoader AS (remoteAddr, remoteLogname, user, time, method, uri, proto, status, bytes, referer, userAgent);
16/02/15 21:39:38 INFO Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
grunt> dump raw;
162493 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: UNKNOWN
16/02/15 21:39:40 INFO pigstats.ScriptState: Pig features used in the script: UNKNOWN
16/02/15 21:39:40 INFO Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
162551 [main] WARN org.apache.pig.data.SchemaTupleBackend - SchemaTupleBackend has already been initialized
16/02/15 21:39:40 WARN data.SchemaTupleBackend: SchemaTupleBackend has already been initialized
162551 [main] INFO org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, ConstantCalculator, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, PartitionFilterOptimizer, PredicatePushdownOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter]}
16/02/15 21:39:40 INFO optimizer.LogicalPlanOptimizer: {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, ConstantCalculator, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, PartitionFilterOptimizer, PredicatePushdownOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter]}
162559 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
16/02/15 21:39:40 INFO mapReduceLayer.MRCompiler: File concatenation threshold: 100 optimistic? false
162562 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
16/02/15 21:39:40 INFO mapReduceLayer.MultiQueryOptimizer: MR plan size before optimization: 1
162562 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
16/02/15 21:39:40 INFO mapReduceLayer.MultiQueryOptimizer: MR plan size after optimization: 1
16/02/15 21:39:40 INFO Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
16/02/15 21:39:40 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-42-90.ec2.internal/172.31.42.90:8032
162586 [main] INFO org.apache.pig.tools.pigstats.mapreduce.MRScriptState - Pig script settings are added to the job
16/02/15 21:39:40 INFO mapreduce.MRScriptState: Pig script settings are added to the job
162586 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
16/02/15 21:39:40 INFO mapReduceLayer.JobControlCompiler: mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
162587 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - This job cannot be converted run in-process
16/02/15 21:39:40 INFO mapReduceLayer.JobControlCompiler: This job cannot be converted run in-process
162611 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/usr/lib/pig/lib/piggybank.jar to DistributedCache through /tmp/temp2003065886/tmp2039083441/piggybank.jar
16/02/15 21:39:40 INFO mapReduceLayer.JobControlCompiler: Added jar file:/usr/lib/pig/lib/piggybank.jar to DistributedCache through /tmp/temp2003065886/tmp2039083441/piggybank.jar
162651 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/usr/lib/pig/pig-0.14.0-amzn-0-core-h2.jar to DistributedCache through /tmp/temp2003065886/tmp551968774/pig-0.14.0-amzn-0-core-h2.jar
16/02/15 21:39:40 INFO mapReduceLayer.JobControlCompiler: Added jar file:/usr/lib/pig/pig-0.14.0-amzn-0-core-h2.jar to DistributedCache through /tmp/temp2003065886/tmp551968774/pig-0.14.0-amzn-0-core-h2.jar
162670 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/usr/lib/pig/lib/automaton-1.11-8.jar to DistributedCache through /tmp/temp2003065886/tmp710362688/automaton-1.11-8.jar
16/02/15 21:39:40 INFO mapReduceLayer.JobControlCompiler: Added jar file:/usr/lib/pig/lib/automaton-1.11-8.jar to DistributedCache through /tmp/temp2003065886/tmp710362688/automaton-1.11-8.jar
162689 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/usr/lib/pig/lib/antlr-runtime-3.4.jar to DistributedCache through /tmp/temp2003065886/tmp-1076004022/antlr-runtime-3.4.jar
16/02/15 21:39:40 INFO mapReduceLayer.JobControlCompiler: Added jar file:/usr/lib/pig/lib/antlr-runtime-3.4.jar to DistributedCache through /tmp/temp2003065886/tmp-1076004022/antlr-runtime-3.4.jar
162714 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/usr/lib/hadoop/lib/guava-11.0.2.jar to DistributedCache through /tmp/temp2003065886/tmp1810740836/guava-11.0.2.jar
16/02/15 21:39:40 INFO mapReduceLayer.JobControlCompiler: Added jar file:/usr/lib/hadoop/lib/guava-11.0.2.jar to DistributedCache through /tmp/temp2003065886/tmp1810740836/guava-11.0.2.jar
162737 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/usr/lib/hadoop-mapreduce/joda-time-2.8.1.jar to DistributedCache through /tmp/temp2003065886/tmp-1238145114/joda-time-2.8.1.jar
16/02/15 21:39:40 INFO mapReduceLayer.JobControlCompiler: Added jar file:/usr/lib/hadoop-mapreduce/joda-time-2.8.1.jar to DistributedCache through /tmp/temp2003065886/tmp-1238145114/joda-time-2.8.1.jar
162752 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
16/02/15 21:39:40 INFO mapReduceLayer.JobControlCompiler: Setting up single store job
162753 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Key [pig.schematuple] is false, will not generate code.
16/02/15 21:39:40 INFO data.SchemaTupleFrontend: Key [pig.schematuple] is false, will not generate code.
162753 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Starting process to move generated code to distributed cacche
16/02/15 21:39:40 INFO data.SchemaTupleFrontend: Starting process to move generated code to distributed cacche
162753 [main] INFO org.apache.pig.data.SchemaTupleFrontend - Setting key [pig.schematuple.classes] with classes to deserialize []
16/02/15 21:39:40 INFO data.SchemaTupleFrontend: Setting key [pig.schematuple.classes] with classes to deserialize []
162776 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
16/02/15 21:39:40 INFO mapReduceLayer.MapReduceLauncher: 1 map-reduce job(s) waiting for submission.
16/02/15 21:39:40 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-42-90.ec2.internal/172.31.42.90:8032
16/02/15 21:39:40 WARN mapreduce.JobResourceUploader: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
16/02/15 21:39:40 INFO input.FileInputFormat: Total input paths to process : 1
162866 [JobControl] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
16/02/15 21:39:40 INFO util.MapRedUtil: Total input paths (combined) to process : 1
16/02/15 21:39:40 INFO mapreduce.JobSubmitter: number of splits:1
16/02/15 21:39:40 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1455560055771_0007
16/02/15 21:39:40 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
16/02/15 21:39:40 INFO impl.YarnClientImpl: Submitted application application_1455560055771_0007
16/02/15 21:39:40 INFO mapreduce.Job: The url to track the job: http://ip-172-31-42-90.ec2.internal:20888/proxy/application_1455560055771_0007/
163278 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_1455560055771_0007
16/02/15 21:39:40 INFO mapReduceLayer.MapReduceLauncher: HadoopJobId: job_1455560055771_0007
163278 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Processing aliases raw
16/02/15 21:39:40 INFO mapReduceLayer.MapReduceLauncher: Processing aliases raw
163278 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - detailed locations: M: raw[2,6],null[-1,-1] C: R:
16/02/15 21:39:40 INFO mapReduceLayer.MapReduceLauncher: detailed locations: M: raw[2,6],null[-1,-1] C: R:
163283 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
16/02/15 21:39:40 INFO mapReduceLayer.MapReduceLauncher: 0% complete
163283 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Running jobs are [job_1455560055771_0007]
16/02/15 21:39:40 INFO mapReduceLayer.MapReduceLauncher: Running jobs are [job_1455560055771_0007]
177841 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 50% complete
16/02/15 21:39:55 INFO mapReduceLayer.MapReduceLauncher: 50% complete
177841 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Running jobs are [job_1455560055771_0007]
16/02/15 21:39:55 INFO mapReduceLayer.MapReduceLauncher: Running jobs are [job_1455560055771_0007]
16/02/15 21:39:56 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-42-90.ec2.internal/172.31.42.90:8032
16/02/15 21:39:56 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
16/02/15 21:39:56 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-42-90.ec2.internal/172.31.42.90:8032
16/02/15 21:39:56 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
16/02/15 21:39:56 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-42-90.ec2.internal/172.31.42.90:8032
16/02/15 21:39:56 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
178506 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
16/02/15 21:39:56 INFO mapReduceLayer.MapReduceLauncher: 100% complete
178506 [main] INFO org.apache.pig.tools.pigstats.mapreduce.SimplePigStats - Script Statistics:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
2.7.1-amzn-0 0.14.0-amzn-0 hadoop 2016-02-15 21:39:40 2016-02-15 21:39:56 UNKNOWN
Success!
Job Stats (time in seconds):
JobId Maps Reduces MaxMapTime MinMapTime AvgMapTime MedianMapTime MaxReduceTime MinReduceTime AvgReduceTime MedianReducetime Alias Feature Outputs
job_1455560055771_0007 1 0 5 5 5 5 0 0 0 0 raw MAP_ONLY hdfs://ip-172-31-42-90.ec2.internal:8020/tmp/temp2003065886/tmp1853785276,
Input(s):
Successfully read 0 records (10040153 bytes) from: "hdfs://ip-172-31-42-90.ec2.internal:8020/user/hadoop/log"
Output(s):
Successfully stored 0 records in: "hdfs://ip-172-31-42-90.ec2.internal:8020/tmp/temp2003065886/tmp1853785276"
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_1455560055771_0007
16/02/15 21:39:56 INFO mapreduce.SimplePigStats: Script Statistics:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
2.7.1-amzn-0 0.14.0-amzn-0 hadoop 2016-02-15 21:39:40 2016-02-15 21:39:56 UNKNOWN
Success!
Job Stats (time in seconds):
JobId Maps Reduces MaxMapTime MinMapTime AvgMapTime MedianMapTime MaxReduceTime MinReduceTime AvgReduceTime MedianReducetime Alias Feature Outputs
job_1455560055771_0007 1 0 5 5 5 5 0 0 0 0 raw MAP_ONLY hdfs://ip-172-31-42-90.ec2.internal:8020/tmp/temp2003065886/tmp1853785276,
Input(s):
Successfully read 0 records (10040153 bytes) from: "hdfs://ip-172-31-42-90.ec2.internal:8020/user/hadoop/log"
Output(s):
Successfully stored 0 records in: "hdfs://ip-172-31-42-90.ec2.internal:8020/tmp/temp2003065886/tmp1853785276"
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_1455560055771_0007
16/02/15 21:39:56 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-42-90.ec2.internal/172.31.42.90:8032
16/02/15 21:39:56 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
16/02/15 21:39:56 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-42-90.ec2.internal/172.31.42.90:8032
16/02/15 21:39:56 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
16/02/15 21:39:56 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-42-90.ec2.internal/172.31.42.90:8032
16/02/15 21:39:56 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
178606 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Success!
16/02/15 21:39:56 INFO mapReduceLayer.MapReduceLauncher: Success!
16/02/15 21:39:56 INFO Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
178607 [main] INFO org.apache.pig.data.SchemaTupleBackend - Key [pig.schematuple] was not set... will not generate code.
16/02/15 21:39:56 INFO data.SchemaTupleBackend: Key [pig.schematuple] was not set... will not generate code.
16/02/15 21:39:56 INFO input.FileInputFormat: Total input paths to process : 1
178616 [main] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
16/02/15 21:39:56 INFO util.MapRedUtil: Total input paths to process : 1
grunt>
As the statistics above show, Pig read 0 records from a roughly 10 MB input: CombinedLogLoader is not parsing any of these combined-format lines. Why?
Best answer
Make sure your log file is formatted correctly. I noticed that the lines in your log file end with a trailing space. Remove that space and run the same script (see the cleanup sketch after the output below). For reference, I ran your script myself and got the following results.
Script:
grunt> raw = LOAD '~/temp1.log' USING org.apache.pig.piggybank.storage.apachelog.CombinedLogLoader AS (remoteAddr, remoteLogname, user, time, method, uri, proto, status, bytes, referer, userAgent);
grunt> dump raw;
Input(s):
Successfully read 4 records (1090 bytes) from: "~/temp1.log"
Output(s):
Successfully stored 4 records (753 bytes) in: "hdfs://sandbox.hortonworks.com:8020/tmp/temp-53432852/tmp1982058168"
Counters:
Total records written : 4
Total bytes written : 753
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_1522732012453_0054
2018-04-03 10:44:33,436 [main] INFO org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
2018-04-03 10:44:33,436 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at sandbox.hortonworks.com/172.17.0.2:8050
2018-04-03 10:44:33,437 [main] INFO org.apache.hadoop.yarn.client.AHSProxy - Connecting to Application History server at sandbox.hortonworks.com/172.17.0.2:10200
2018-04-03 10:44:33,467 [main] INFO org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
2018-04-03 10:44:33,620 [main] INFO org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
2018-04-03 10:44:33,620 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at sandbox.hortonworks.com/172.17.0.2:8050
2018-04-03 10:44:33,620 [main] INFO org.apache.hadoop.yarn.client.AHSProxy - Connecting to Application History server at sandbox.hortonworks.com/172.17.0.2:10200
2018-04-03 10:44:33,635 [main] INFO org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
2018-04-03 10:44:33,797 [main] INFO org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
2018-04-03 10:44:33,797 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at sandbox.hortonworks.com/172.17.0.2:8050
2018-04-03 10:44:33,799 [main] INFO org.apache.hadoop.yarn.client.AHSProxy - Connecting to Application History server at sandbox.hortonworks.com/172.17.0.2:10200
2018-04-03 10:44:33,813 [main] INFO org.apache.hadoop.mapred.ClientServiceDelegate - Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
2018-04-03 10:44:33,929 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Success!
2018-04-03 10:44:33,939 [main] INFO org.apache.pig.data.SchemaTupleBackend - Key [pig.schematuple] was not set... will not generate code.
2018-04-03 10:44:33,955 [main] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2018-04-03 10:44:33,955 [main] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
(123.125.67.216,-,-,02/Jan/2012:00:48:27 -0800,GET,/wiki/Dekart,HTTP/1.1,200,4512,-,Mozilla/5.0 (Windows NT 5.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.2)
(209.85.238.130,-,-,02/Jan/2012:00:48:27 -0800,GET,/w/index.php?title=Special:RecentChanges&feed=atom,HTTP/1.1,304,260,-,Feedfetcher-Google; (+http://www.google.com/feedfetcher.html; 4 subscribers; feed-id=11568779694056348047))
(123.125.67.213,-,-,02/Jan/2012:00:48:33 -0800,GET,/,HTTP/1.1,301,433,-,Mozilla/5.0 (Windows NT 5.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.2)
(123.125.67.214,-,-,02/Jan/2012:00:48:33 -0800,GET,/wiki/Main_Page,HTTP/1.1,200,8647,-,Mozilla/5.0 (Windows NT 5.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.2)
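If editing the file by hand is not practical, the stray whitespace can also be stripped inside Pig before CombinedLogLoader ever sees it: load each line as plain text, TRIM it, store a cleaned copy, and re-load that copy. The sketch below is only an illustration under assumed paths ('log' for the original file and 'log_clean' for the cleaned copy, neither taken from the original post); it uses the built-in TextLoader, TRIM, and ENDSWITH functions.

-- Register piggybank if it is not already on the classpath (jar path taken from the logs above).
-- REGISTER /usr/lib/pig/lib/piggybank.jar;

-- Load each raw line as a single chararray.
lines = LOAD 'log' USING TextLoader() AS (line:chararray);

-- Optional check: how many lines end with a space?
bad = FILTER lines BY ENDSWITH(line, ' ');
bad_count = FOREACH (GROUP bad ALL) GENERATE COUNT(bad);
DUMP bad_count;

-- Strip leading/trailing whitespace and store a cleaned copy.
clean = FOREACH lines GENERATE TRIM(line) AS line;
STORE clean INTO 'log_clean';

-- Re-load the cleaned copy with the combined-log loader.
raw = LOAD 'log_clean' USING org.apache.pig.piggybank.storage.apachelog.CombinedLogLoader AS (remoteAddr, remoteLogname, user, time, method, uri, proto, status, bytes, referer, userAgent);
DUMP raw;

Note that TRIM removes leading as well as trailing whitespace; for Apache access logs this should be harmless, since valid lines start with the client IP.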
Regarding "hadoop - org.apache.pig.piggybank.storage.apachelog.CombinedLogLoader does not load files in combined log format", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/35419646/