I need to write DQL query results to Excel in Java using Apache POI and DFC. I wrote the code below, but it needs to be reorganized to run efficiently. I tried moving the Excel writing into a separate utility method, but could not get that to work, so I stopped calling the method and put everything in main. The result is inefficient code.
The first DQL fetches some attributes along with i_chronicle_id, and that i_chronicle_id has to be passed as r_child_id to the second DQL. I need to add these attribute values to Excel: create the file if it does not exist, and write/append the data if it does. But the more data is written, the slower it gets. With HSSFWorkbook the maximum row count I could reach was 1370; I have not checked XSSFWorkbook. I searched the existing posts on writing to Excel but could not get a working implementation, so I am asking here. Please help me organize this code efficiently, and make it move on to the next sheet when the current sheet is full. Let me know if you need any more information. Thanks in advance!
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.*;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.IDfLoginInfo;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;
import org.apache.poi.ss.usermodel.*;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
import java.io.*;
import java.util.*;
import java.util.stream.Collectors;
public class MedicalDevicesReport {
private static int j = 0;
public static void main(String[] args) throws DfException {
String chronicleId;
String documentId, documentName, title, domain, primaryGroup, subGroup, artifactName, versionLabel, status, creationDate,
versionNum = null, is_current;
ArrayList<String> author = new ArrayList<>();
ArrayList<String> reviewer = new ArrayList<>();
ArrayList<String> formatReviewer = new ArrayList<>();
ArrayList<String> approver = new ArrayList<>();
ArrayList<String> approvalCompletionTime = new ArrayList<>();
int wfAbortCount = 0;
String authorsF, reviewersF, formatReviewersF, approversF;
String approvalCompletionTimeStamps;
IDfClientX clientX = new DfClientX();
IDfClient dfClient = clientX.getLocalClient();
IDfSessionManager sessionManager = dfClient.newSessionManager();
IDfLoginInfo loginInfo = clientX.getLoginInfo();
loginInfo.setUser("user");
loginInfo.setPassword("password");
sessionManager.setIdentity("docbase", loginInfo);
IDfSession dfSession = sessionManager.getSession("docbase");
System.out.println(dfSession);
IDfQuery idfquery = new DfQuery();
IDfCollection collection1 = null;
IDfCollection collection2 = null;
try {
String dql1 = "select distinct r_object_id, object_name, title, authors, domain, primary_group, subgroup, artifact_name, r_version_label," +
"a_status, r_creation_date, i_chronicle_id from cd_quality_gmp_approved (all) where r_creation_date between " +
"DATE('07/04/2018 00:00:00','mm/dd/yyyy hh:mi:ss') and DATE('07/05/2018 23:59:59','mm/dd/yyyy hh:mi:ss') order by r_creation_date";
idfquery.setDQL(dql1);
collection1 = idfquery.execute(dfSession, IDfQuery.DF_READ_QUERY);
int i = 1;
while(collection1 != null && collection1.next()) {
chronicleId = collection1.getString("i_chronicle_id");
author.add(collection1.getString("authors"));
String dql2 = "select a.r_object_id, a.audited_obj_id, a.event_name as event_name, a.object_name as workflow_name, " +
"doc.object_name as document_name, ra.child_label as document_version, a.owner_name as supervisor_name, " +
"w.tracker_state as task_state, w.start_date as date_sent, a.user_name as task_performer, a.time_stamp as " +
"task_completion_time, a.string_2 as outcome, a.event_source as event_source, a.string_3 as delegation_from, " +
"a.string_4 as delegation_to from dm_audittrail a, d2c_workflow_tracker w, dm_relation ra, dm_sysobject doc " +
"where a.audited_obj_id in (select w.r_object_id from d2c_workflow_tracker w where r_object_id in (select " +
"distinct w.r_object_id from dm_relation r, d2c_workflow_tracker w where r.relation_name = 'D2_WF_TRACKER_DOCUMENT' " +
"and r.child_id = '" + chronicleId + "' and r.parent_id=w.r_object_id)) and a.audited_obj_id=w.r_object_id and " +
"ra.parent_id=w.r_object_id and a.audited_obj_id=ra.parent_id and ((a.event_name='d2_workflow_sent_task' and " +
"a.user_name not in (select user_name from dm_audittrail b where b.event_name in ('d2_workflow_rejected_task', " +
"'d2_workflow_forwarded_task', 'd2_delegation_delegated_task', 'd2_workflow_delegated_task', 'd2_workflow_added_note', " +
"'d2_workflow_aborted') and b.audited_obj_id=a.audited_obj_id)) or (a.event_name in ('d2_workflow_rejected_task', " +
"'d2_workflow_forwarded_task', 'd2_workflow_added_note', 'd2_workflow_aborted') and a.string_2 is not nullstring) or " +
"(a.event_name in ('d2_delegation_delegated_task','d2_workflow_delegated_task', 'd2_workflow_added_note', " +
"'d2_workflow_aborted'))) and doc.i_chronicle_id=ra.child_id and ra.child_label not In ('CURRENT',' ') order by 1 desc;";
idfquery.setDQL(dql2);
collection2 = idfquery.execute(dfSession, IDfQuery.DF_READ_QUERY);
while(collection2 != null && collection2.next()) {
String supervisorName = collection2.getString("supervisor_name");
author.add(supervisorName);
if(collection2.getString("event_name").equals("d2_workflow_aborted")) {
wfAbortCount++;
}
if(collection2.getString("event_source").equals("Review")) {
reviewer.add(collection2.getString("task_performer"));
continue;
}
if(collection2.getString("event_source").equals("Format Review")) {
if(collection2.getString("task_performer").contains("grp_wf_")) {
continue;
} else {
formatReviewer.add(collection2.getString("task_performer"));
continue;
}
}
if((collection2.getString("event_source").equals("First Approval-no Sig")) ||
(collection2.getString("event_source").equals("First Approval")) ||
(collection2.getString("event_source").equals("Second Approval-no Sig")) ||
(collection2.getString("event_source").equals("Second Approval")) ||
(collection2.getString("event_source").contains("Approval"))) {
approver.add(collection2.getString("task_performer"));
approvalCompletionTime.add(collection2.getString("task_completion_time"));
}
}
documentId = collection1.getString("r_object_id");
documentName = collection1.getString("object_name");
title = collection1.getString("title");
domain = collection1.getString("domain");
primaryGroup = collection1.getString("primary_group");
subGroup = collection1.getString("subgroup");
artifactName = collection1.getString("artifact_name");
versionLabel = collection1.getString("r_version_label");
status = collection1.getString("a_status");
creationDate = collection1.getString("r_creation_date");
String temp = versionLabel;
String[] parts = temp.split("(?<=\\D)(?=\\d\\.?\\d)");
if(parts.length > 1) {
versionNum = parts[1];
is_current = parts[0];
} else {
is_current = parts[0];
}
String versionLabelF = versionNum + " " + is_current;
List<String> authors = author.stream().distinct().collect(Collectors.toList());
List<String> reviewers = reviewer.stream().distinct().collect(Collectors.toList());
List<String> formatReviewers = formatReviewer.stream().distinct().collect(Collectors.toList());
List<String> approvers = approver.stream().distinct().collect(Collectors.toList());
List<String> approvalCompletionTimeStamp = approvalCompletionTime.stream().distinct().collect(Collectors.toList());
authorsF = authors.toString().substring(1, authors.toString().length() - 1);
reviewersF = reviewers.toString().substring(1, reviewers.toString().length() - 1);
formatReviewersF = formatReviewers.toString().substring(1, formatReviewers.toString().length() - 1);
approversF = approvers.toString().substring(1, approvers.toString().length() - 1);
approvalCompletionTimeStamps = approvalCompletionTimeStamp.toString().substring(1, approvalCompletionTimeStamp.toString().length() - 1);
author.clear();
reviewer.clear();
formatReviewer.clear();
approver.clear();
approvalCompletionTime.clear();
Workbook workbook = null;
File file = new File("C:\\SubWay TRC\\fetched_reports\\mdreport.xlsx");
try {
if (!file.exists()) {
if (!file.toString().endsWith(".xls")) {
workbook = new XSSFWorkbook();
workbook.createSheet();
}
} else {
workbook = WorkbookFactory.create(new FileInputStream(file));
workbook.createSheet();
}
} catch(IOException ioe) {
ioe.printStackTrace();
}
Row row;
try {
Sheet sheet = workbook.getSheetAt(j);
int last_row = sheet.getLastRowNum();
System.out.println(last_row);
row = sheet.createRow(++last_row);
Map<Integer, Object[]> data = new HashMap<>();
data.put(i, new Object[] {documentId, documentName, title, domain, primaryGroup, subGroup, artifactName, versionLabelF,
status, creationDate, authorsF, reviewersF, formatReviewersF, approversF, approvalCompletionTimeStamps, wfAbortCount});
Set<Integer> key_set = data.keySet();
for(Integer key: key_set) {
Object[] obj_arr = data.get(key);
int cell_num = 0;
for(Object obj: obj_arr) {
Cell cell = row.createCell(cell_num++);
if(obj instanceof String) {
cell.setCellValue((String)obj);
}
}
}
FileOutputStream out = new FileOutputStream("C:\\SubWay TRC\\fetched_reports\\mdreport.xlsx", false);
workbook.write(out);
out.close();
System.out.println("Data added successfully");
} catch (IOException e) {
e.printStackTrace();
}
}
} finally {
if(collection1 != null) {
collection1.close();
}
if(collection2 != null) {
collection2.close();
}
if(dfSession != null) {
sessionManager.release(dfSession);
}
}
}
private static void executeWorkflowAudit(IDfQuery idfquery, IDfSession dfSession, IDfCollection attributeCollection,
String chronicleId, int i) throws DfException {
IDfCollection collection;
String documentId, documentName, title, domain, primaryGroup, subGroup, artifactName, versionLabel, status, creationDate,
versionNum = null, is_current;
ArrayList<String> author = new ArrayList<>();
ArrayList<String> reviewer = new ArrayList<>();
ArrayList<String> formatReviewer = new ArrayList<>();
ArrayList<String> approver = new ArrayList<>();
ArrayList<String> approvalCompletionTime = new ArrayList<>();
int wfAbortCount = 0;
String authorsF, reviewersF, formatReviewersF, approversF;
String approvalCompletionTimeStamps;
String dql = "select a.r_object_id, a.audited_obj_id, a.event_name as event_name, a.object_name as workflow_name, " +
"doc.object_name as document_name, ra.child_label as document_version, a.owner_name as supervisor_name, " +
"w.tracker_state as task_state, w.start_date as date_sent, a.user_name as task_performer, a.time_stamp as " +
"task_completion_time, a.string_2 as outcome, a.event_source as event_source, a.string_3 as delegation_from, " +
"a.string_4 as delegation_to from dm_audittrail a, d2c_workflow_tracker w, dm_relation ra, dm_sysobject doc " +
"where a.audited_obj_id in (select w.r_object_id from d2c_workflow_tracker w where r_object_id in (select " +
"distinct w.r_object_id from dm_relation r, d2c_workflow_tracker w where r.relation_name = 'D2_WF_TRACKER_DOCUMENT' " +
"and r.child_id = '" + chronicleId + "' and r.parent_id=w.r_object_id)) and a.audited_obj_id=w.r_object_id and " +
"ra.parent_id=w.r_object_id and a.audited_obj_id=ra.parent_id and ((a.event_name='d2_workflow_sent_task' and " +
"a.user_name not in (select user_name from dm_audittrail b where b.event_name in ('d2_workflow_rejected_task', " +
"'d2_workflow_forwarded_task', 'd2_delegation_delegated_task', 'd2_workflow_delegated_task', 'd2_workflow_added_note', " +
"'d2_workflow_aborted') and b.audited_obj_id=a.audited_obj_id)) or (a.event_name in ('d2_workflow_rejected_task', " +
"'d2_workflow_forwarded_task', 'd2_workflow_added_note', 'd2_workflow_aborted') and a.string_2 is not nullstring) or " +
"(a.event_name in ('d2_delegation_delegated_task','d2_workflow_delegated_task', 'd2_workflow_added_note', " +
"'d2_workflow_aborted'))) and doc.i_chronicle_id=ra.child_id and ra.child_label not In ('CURRENT',' ') order by 1 desc";
idfquery.setDQL(dql);
collection = idfquery.execute(dfSession, IDfQuery.READ_QUERY);
while(collection != null && collection.next()) {
String supervisorName = collection.getString("supervisor_name");
author.add(supervisorName);
if(collection.getString("event_name").equals("d2_workflow_aborted")) {
wfAbortCount++;
}
if(collection.getString("event_source").equals("Review")) {
reviewer.add(collection.getString("task_performer"));
continue;
}
if(collection.getString("event_source").equals("Format Review")) {
if(collection.getString("task_performer").contains("grp_wf_")) {
continue;
} else {
formatReviewer.add(collection.getString("task_performer"));
continue;
}
}
if((collection.getString("event_source").equals("First Approval-no Sig")) ||
(collection.getString("event_source").equals("First Approval")) ||
(collection.getString("event_source").equals("Second Approval-no Sig")) ||
(collection.getString("event_source").equals("Second Approval"))) {
approver.add(collection.getString("task_performer"));
approvalCompletionTime.add(collection.getString("task_completion_time"));
}
documentId = attributeCollection.getString("r_object_id");
documentName = attributeCollection.getString("object_name");
title = attributeCollection.getString("title");
domain = attributeCollection.getString("domain");
primaryGroup = attributeCollection.getString("primary_group");
subGroup = attributeCollection.getString("subgroup");
artifactName = attributeCollection.getString("artifact_name");
versionLabel = attributeCollection.getString("r_version_label");
status = attributeCollection.getString("a_status");
creationDate = attributeCollection.getString("r_creation_date");
String temp = versionLabel;
String[] parts = temp.split("(?<=\\D)(?=\\d\\.?\\d)");
if(parts.length > 1) {
versionNum = parts[1];
is_current = parts[0];
} else {
is_current = parts[0];
}
String versionLabelF = versionNum + " " + is_current;
List<String> authors = author.stream().distinct().collect(Collectors.toList());
List<String> reviewers = reviewer.stream().distinct().collect(Collectors.toList());
List<String> formatReviewers = formatReviewer.stream().distinct().collect(Collectors.toList());
List<String> approvers = approver.stream().distinct().collect(Collectors.toList());
List<String> approvalCompletionTimeStamp = approvalCompletionTime.stream().distinct().collect(Collectors.toList());
authorsF = authors.toString().substring(1, authors.toString().length() - 1);
reviewersF = reviewers.toString().substring(1, reviewers.toString().length() - 1);
formatReviewersF = formatReviewers.toString().substring(1, formatReviewers.toString().length() - 1);
approversF = approvers.toString().substring(1, approvers.toString().length() - 1);
approvalCompletionTimeStamps = approvalCompletionTimeStamp.toString().substring(1, approvalCompletionTimeStamp.toString().length() - 1);
author.clear();
reviewer.clear();
formatReviewer.clear();
approver.clear();
approvalCompletionTime.clear();
Workbook workbook = null;
File file = new File("C:\\SubWay TRC\\fetched_reports\\wfperf.xls");
try {
if (!file.exists()) {
if (!file.toString().endsWith(".xlsx")) {
workbook = new HSSFWorkbook();
workbook.createSheet();
}
} else {
workbook = WorkbookFactory.create(new FileInputStream(file));
workbook.createSheet();
}
} catch(IOException ioe) {
ioe.printStackTrace();
}
Row row;
try {
Sheet sheet = workbook.getSheetAt(j);
int last_row = sheet.getLastRowNum();
System.out.println(last_row);
if(last_row == 1370) {
++j;
sheet = workbook.getSheetAt(j);
int last_row_new = sheet.getLastRowNum();
row = sheet.createRow(++last_row_new);
} else {
row = sheet.createRow(++last_row);
}
Map<Integer, Object[]> data = new HashMap<>();
data.put(i, new Object[] {documentId, documentName, title, domain, primaryGroup, subGroup, artifactName, versionLabelF,
status, creationDate, authorsF, reviewersF, formatReviewersF, approversF, approvalCompletionTimeStamps, wfAbortCount});
Set<Integer> key_set = data.keySet();
for(Integer key: key_set) {
Object[] obj_arr = data.get(key);
int cell_num = 0;
for(Object obj: obj_arr) {
Cell cell = row.createCell(cell_num++);
if(obj instanceof String) {
cell.setCellValue((String)obj);
}
}
}
FileOutputStream out = new FileOutputStream("C:\\SubWay TRC\\fetched_reports\\wfperf.xls", false);
workbook.write(out);
out.close();
System.out.println("Data added successfully");
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
Best Answer
Try the code below for the reorganization. In your executeWorkflowAudit() method you collect all the attribute data inside the while loop; if the collection returns no results, this skips the data you wanted to add in addition to the workflow data. Move the attribute reads outside the while loop so the data from the initial collection is never skipped. I have also separated out the session-manager creation and session acquisition. Similarly, you could keep the DQL queries in a separate class (such as a QueryConstants) and access them from there. This should work, please try it. I am not sure about the maximum row count; I will update if I can find it. Hope this helps! In any case, you can refer to this for writing large amounts of data to Excel.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.*;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.IDfLoginInfo;
import org.apache.poi.ss.usermodel.*;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.*;
import java.util.stream.Collectors;
public class MedicalDevicesReport {
private static int j = 0;
public static void main(String[] args) throws DfException {
String chronicleId;
ArrayList<String> author = new ArrayList<>();
IDfSessionManager sessionManager = getSessionManager("docbase", "user", "password");
IDfSession dfSession = sessionManager.getSession("docbase");
System.out.println(dfSession);
IDfQuery idfquery = new DfQuery();
IDfCollection collection;
try {
String dql = "select distinct r_object_id, object_name, title, authors, domain, primary_group, subgroup, artifact_name, r_version_label," +
"a_status, r_creation_date, i_chronicle_id from cd_quality_gmp_approved (all) where r_creation_date between " +
"DATE('07/04/2018 00:00:00','mm/dd/yyyy hh:mi:ss') and DATE('07/05/2018 23:59:59','mm/dd/yyyy hh:mi:ss') order by r_creation_date";
idfquery.setDQL(dql);
collection = idfquery.execute(dfSession, IDfQuery.DF_READ_QUERY);
int i = 1;
File file = new File("C:\\SubWay TRC\\fetched_reports\\mdreport.xlsx");
while(collection != null && collection.next()) {
chronicleId = collection.getString("i_chronicle_id");
author.add(collection.getString("authors"));
executeWorkflowAudit(dfSession, collection, idfquery, chronicleId, author, i, file);
i++;
}
} finally {
cleanup(sessionManager, dfSession);
}
}
private static void executeWorkflowAudit(IDfSession dfSession, IDfCollection attributeCollection, IDfQuery idfquery, String chronicleId, ArrayList<String> author,
int i, File file) throws DfException {
IDfCollection collection;
String documentId, documentName, title, domain, primaryGroup, subGroup, artifactName, versionLabel, status, creationDate,
versionNum = null, is_current;
ArrayList<String> reviewer = new ArrayList<>();
ArrayList<String> formatReviewer = new ArrayList<>();
ArrayList<String> approver = new ArrayList<>();
ArrayList<String> approvalCompletionTime = new ArrayList<>();
String authorsF, reviewersF, formatReviewersF, approversF;
String approvalCompletionTimeStamps;
int wfAbortCount = 0;
String dql = "select a.r_object_id, a.audited_obj_id, a.event_name as event_name, a.object_name as workflow_name, " +
"doc.object_name as document_name, ra.child_label as document_version, a.owner_name as supervisor_name, " +
"w.tracker_state as task_state, w.start_date as date_sent, a.user_name as task_performer, a.time_stamp as " +
"task_completion_time, a.string_2 as outcome, a.event_source as event_source, a.string_3 as delegation_from, " +
"a.string_4 as delegation_to from dm_audittrail a, d2c_workflow_tracker w, dm_relation ra, dm_sysobject doc " +
"where a.audited_obj_id in (select w.r_object_id from d2c_workflow_tracker w where r_object_id in (select " +
"distinct w.r_object_id from dm_relation r, d2c_workflow_tracker w where r.relation_name = 'D2_WF_TRACKER_DOCUMENT' " +
"and r.child_id = '" + chronicleId + "' and r.parent_id=w.r_object_id)) and a.audited_obj_id=w.r_object_id and " +
"ra.parent_id=w.r_object_id and a.audited_obj_id=ra.parent_id and ((a.event_name='d2_workflow_sent_task' and " +
"a.user_name not in (select user_name from dm_audittrail b where b.event_name in ('d2_workflow_rejected_task', " +
"'d2_workflow_forwarded_task', 'd2_delegation_delegated_task', 'd2_workflow_delegated_task', 'd2_workflow_added_note', " +
"'d2_workflow_aborted') and b.audited_obj_id=a.audited_obj_id)) or (a.event_name in ('d2_workflow_rejected_task', " +
"'d2_workflow_forwarded_task', 'd2_workflow_added_note', 'd2_workflow_aborted') and a.string_2 is not nullstring) or " +
"(a.event_name in ('d2_delegation_delegated_task','d2_workflow_delegated_task', 'd2_workflow_added_note', " +
"'d2_workflow_aborted'))) and doc.i_chronicle_id=ra.child_id and ra.child_label not In ('CURRENT',' ') order by 1 desc";
idfquery.setDQL(dql);
collection = idfquery.execute(dfSession, IDfQuery.READ_QUERY);
while(collection != null && collection.next()) {
String supervisorName = collection.getString("supervisor_name");
author.add(supervisorName);
if(collection.getString("event_name").equals("d2_workflow_aborted")) {
wfAbortCount++;
}
if(collection.getString("event_source").equals("Review")) {
reviewer.add(collection.getString("task_performer"));
continue;
}
if(collection.getString("event_source").equals("Format Review")) {
if(collection.getString("task_performer").contains("grp_wf_")) {
continue;
} else {
formatReviewer.add(collection.getString("task_performer"));
continue;
}
}
if((collection.getString("event_source").equals("First Approval-no Sig")) ||
(collection.getString("event_source").equals("First Approval")) ||
(collection.getString("event_source").equals("Second Approval-no Sig")) ||
(collection.getString("event_source").equals("Second Approval")) ||
(collection.getString("event_source").contains("Approval"))) {
approver.add(collection.getString("task_performer"));
approvalCompletionTime.add(collection.getString("task_completion_time"));
}
}
documentId = attributeCollection.getString("r_object_id");
documentName = attributeCollection.getString("object_name");
title = attributeCollection.getString("title");
domain = attributeCollection.getString("domain");
primaryGroup = attributeCollection.getString("primary_group");
subGroup = attributeCollection.getString("subgroup");
artifactName = attributeCollection.getString("artifact_name");
versionLabel = attributeCollection.getString("r_version_label");
status = attributeCollection.getString("a_status");
creationDate = attributeCollection.getString("r_creation_date");
String temp = versionLabel;
String[] parts = temp.split("(?<=\\D)(?=\\d\\.?\\d)");
if(parts.length > 1) {
versionNum = parts[1];
is_current = parts[0];
} else {
is_current = parts[0];
}
String versionLabelF = versionNum + " " + is_current;
List<String> authors = author.stream().distinct().collect(Collectors.toList());
List<String> reviewers = reviewer.stream().distinct().collect(Collectors.toList());
List<String> formatReviewers = formatReviewer.stream().distinct().collect(Collectors.toList());
List<String> approvers = approver.stream().distinct().collect(Collectors.toList());
List<String> approvalCompletionTimeStamp = approvalCompletionTime.stream().distinct().collect(Collectors.toList());
authorsF = authors.toString().substring(1, authors.toString().length() - 1);
reviewersF = reviewers.toString().substring(1, reviewers.toString().length() - 1);
formatReviewersF = formatReviewers.toString().substring(1, formatReviewers.toString().length() - 1);
approversF = approvers.toString().substring(1, approvers.toString().length() - 1);
approvalCompletionTimeStamps = approvalCompletionTimeStamp.toString().substring(1, approvalCompletionTimeStamp.toString().length() - 1);
author.clear();
reviewer.clear();
formatReviewer.clear();
approver.clear();
approvalCompletionTime.clear();
Workbook workbook = null;
try {
if (!file.exists()) {
if (!file.toString().endsWith(".xls")) {
workbook = new XSSFWorkbook();
workbook.createSheet();
}
} else {
workbook = WorkbookFactory.create(new FileInputStream(file));
workbook.createSheet();
}
} catch(IOException ioe) {
ioe.printStackTrace();
}
Row row;
try {
Sheet sheet = workbook.getSheetAt(j);
int last_row = sheet.getLastRowNum();
System.out.println(last_row);
row = sheet.createRow(++last_row);
Map<Integer, Object[]> data = new HashMap<>();
data.put(i, new Object[] {documentId, documentName, title, domain, primaryGroup, subGroup, artifactName, versionLabelF,
status, creationDate, authorsF, reviewersF, formatReviewersF, approversF, approvalCompletionTimeStamps, wfAbortCount});
Set<Integer> key_set = data.keySet();
for(Integer key: key_set) {
Object[] obj_arr = data.get(key);
int cell_num = 0;
for(Object obj: obj_arr) {
Cell cell = row.createCell(cell_num++);
if(obj instanceof String) {
cell.setCellValue((String)obj);
}
}
}
FileOutputStream out = new FileOutputStream("C:\\SubWay TRC\\fetched_reports\\mdreport.xlsx", false);
workbook.write(out);
out.close();
System.out.println("Data added successfully");
} catch (IOException e) {
e.printStackTrace();
} finally {
if(collection != null) {
collection.close();
}
}
}
private static IDfSessionManager getSessionManager(String docbase, String userName, String password) throws DfException {
IDfClientX clientX = new DfClientX();
IDfClient client = clientX.getLocalClient();
IDfSessionManager sessionManager = client.newSessionManager();
IDfLoginInfo loginInfo = clientX.getLoginInfo();
loginInfo.setUser(userName);
loginInfo.setPassword(password);
sessionManager.setIdentity(docbase, loginInfo);
return sessionManager;
}
public static void cleanup(IDfSessionManager sessionManager, IDfSession session) {
if(sessionManager != null && session != null) {
sessionManager.release(session);
}
}
}
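One structural point worth calling out on top of the answer: both versions reopen and re-parse the entire workbook for every result row, which is why writing slows down as the file grows; opening the workbook once before the loop and writing it out once afterwards avoids that. Two of the smaller patterns can also be expressed as self-contained helpers. This is a minimal sketch, not part of POI or DFC (the class and method names are illustrative); the 65,536-row limit is the `.xls`/HSSF per-sheet cap, while `.xlsx`/XSSF sheets allow 1,048,576 rows.

```java
import java.util.List;
import java.util.stream.Collectors;

public class ReportHelpers {
    // Per-sheet row cap: 65,536 for .xls (HSSF); .xlsx (XSSF) allows 1,048,576.
    static final int MAX_ROWS_PER_SHEET = 65536;

    // Replaces the toString().substring(1, len - 1) bracket-stripping
    // pattern: joins the distinct values directly, preserving order.
    static String joinDistinct(List<String> values) {
        return values.stream().distinct().collect(Collectors.joining(", "));
    }

    // Map a running record counter to a sheet/row pair so the writer
    // rolls over to the next sheet once the current one is full.
    static int sheetIndex(int record) { return record / MAX_ROWS_PER_SHEET; }
    static int rowIndex(int record)   { return record % MAX_ROWS_PER_SHEET; }
}
```

With helpers like these, the writer keeps one global record counter `n`, fetches the target sheet with `workbook.getSheetAt(sheetIndex(n))` (calling `createSheet()` only when `sheetIndex(n)` equals `workbook.getNumberOfSheets()`), creates the row with `sheet.createRow(rowIndex(n))`, and the five `substring` lines become calls like `authorsF = joinDistinct(author)`.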
Regarding "java - writing and appending DQL query results to Excel using Apache POI and DFC", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57056644/