
Usage of the org.openimaj.video.xuggle.XuggleVideo class, with code examples

Reposted. Author: 知者. Updated: 2024-03-22 14:37:05

This article collects a number of code examples for the Java class org.openimaj.video.xuggle.XuggleVideo and shows how the class is used. The examples are drawn from selected projects on GitHub, Stack Overflow, Maven and similar platforms, so they should serve as useful references. Details of the XuggleVideo class are as follows:
Package path: org.openimaj.video.xuggle.XuggleVideo
Class name: XuggleVideo

About XuggleVideo

Wraps a Xuggle video reader into the OpenIMAJ Video interface.

Some Notes:

The #hasNextFrame() method must attempt to read the next packet in the stream to determine if there is a next frame. That means that it incurs a time penalty. It also means there's various logic in that method and the #getNextFrame() method to avoid reading frames that have already been read. It also means that, to avoid #getCurrentFrame() incorrectly returning a new frame after #hasNextFrame() has been called, the class may be holding two frames (the current frame and the next frame) after #hasNextFrame() has been called.

The constructors have signatures that allow passing a boolean that determines whether the video is looped. This has a different effect than looping with the VideoDisplay. When the video is set to loop it will loop indefinitely and the timestamps of frames will be consecutive; that is, when the video loops the timestamps will continue to increase. This is in contrast to setting the VideoDisplay end action (using VideoDisplay#setEndAction(org.openimaj.video.VideoDisplay.EndAction)), where looping resets all timestamps each time the video loops.
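The difference between the looping constructor flag and a VideoDisplay end action comes down to how timestamps evolve across loop boundaries. A self-contained sketch of the two behaviours (the class and method names here are illustrative models of the arithmetic, not OpenIMAJ API):

```java
public class LoopTimestampModel {
    /** Constructor-flag looping: frame indices keep counting up, so timestamps never reset. */
    static long continuousTimestampMillis(long frameIndex, double fps) {
        return Math.round(1000.0 * frameIndex / fps);
    }

    /** VideoDisplay end-action looping: time restarts from zero on each loop. */
    static long resettingTimestampMillis(long frameIndex, long framesPerLoop, double fps) {
        return Math.round(1000.0 * (frameIndex % framesPerLoop) / fps);
    }

    public static void main(String[] args) {
        // A 100-frame clip at 25 fps; frame 150 is the 50th frame of the second pass.
        System.out.println(continuousTimestampMillis(150, 25.0));     // 6000 ms
        System.out.println(resettingTimestampMillis(150, 100, 25.0)); // 2000 ms
    }
}
```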

Code examples

Code example from origin: openimaj/openimaj

MultiTouchSurface(){
  String sourceURL = "http://152.78.64.19:8080/foo";
  stream = new XuggleVideo(sourceURL);
}

Code example from origin: openimaj/openimaj

@Override
public void precalc()
{
  this.videoLength = 1000d * video.countFrames() / video.getFPS();
}
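The precalc() above computes the video length in milliseconds as 1000 × frame count / fps. The same arithmetic in isolation (a plain-Java sketch with illustrative names, no OpenIMAJ types):

```java
public class VideoLength {
    /** Length in milliseconds of a video with the given frame count and frame rate. */
    static double videoLengthMillis(long frameCount, double fps) {
        // Multiply by 1000d first so the whole expression is evaluated in
        // double precision rather than truncating integer division.
        return 1000d * frameCount / fps;
    }

    public static void main(String[] args) {
        // 250 frames at 25 fps is a 10-second (10000 ms) video.
        System.out.println(videoLengthMillis(250, 25.0)); // 10000.0
    }
}
```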

Code example from origin: org.openimaj/data-scraping

@Override
public int getHeight() {
  return vid.getHeight();
}

Code example from origin: openimaj/openimaj

/**
 * Implements a precise seeking mechanism based on the Xuggle seek method
 * and the naive seek method which simply reads frames.
 * <p>
 * Note: if you created the video from a {@link DataInput} or
 * {@link InputStream}, you can only seek forwards.
 *
 * @param timestamp
 *            The timestamp to get, in seconds.
 */
public void seekPrecise(double timestamp) {
  // Use the Xuggle seek method first to get near the frame
  this.seek(timestamp);
  // The timestamp field is in milliseconds, so we need to * 1000 to
  // compare
  timestamp *= 1000;
  // Work out the number of milliseconds per frame
  final double timePerFrame = 1000d / this.fps;
  // If we're not in the right place, keep reading until we are.
  // Note the right place is the frame before the timestamp we're given:
  // |---frame 1---|---frame2---|---frame3---|
  // ^- given timestamp
  // ... so we should show frame2 not frame3.
  while (this.timestamp <= timestamp - timePerFrame && this.getNextFrame() != null)
    ;
}
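The "frame before the timestamp" rule in the comment above can be checked with a small piece of arithmetic: frame i is on screen during the interval [i/fps, (i+1)/fps), so the frame to show for a given timestamp is floor(timestamp × fps). A hedged sketch of that rule (names are illustrative, not OpenIMAJ API):

```java
public class PreciseSeekMath {
    /** Index of the frame whose display interval contains the given timestamp. */
    static long frameForTimestamp(double seconds, double fps) {
        // Frame i covers [i / fps, (i + 1) / fps), so the containing
        // frame is simply floor(seconds * fps).
        return (long) Math.floor(seconds * fps);
    }

    public static void main(String[] args) {
        // At 25 fps, a timestamp of 1.30 s falls inside frame 32's
        // interval [1.28 s, 1.32 s), so frame 32 (not 33) should be shown.
        System.out.println(frameForTimestamp(1.30, 25.0)); // 32
    }
}
```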

Code example from origin: openimaj/openimaj

@Override
  public void render(final MBFImageRenderer renderer, final Matrix transform, final Rectangle rectangle) {
    if (this.toRender == null) {
      this.toRender = new XuggleVideo(
          VideoColourSIFT.class.getResource("/org/openimaj/demos/video/keyboardcat.flv"), true);
      this.renderToBounds = TransformUtilities.makeTransform(new Rectangle(0, 0, this.toRender.getWidth(),
          this.toRender.getHeight()), rectangle);
    }
    final MBFProjectionProcessor mbfPP = new MBFProjectionProcessor();
    mbfPP.setMatrix(transform.times(this.renderToBounds));
    mbfPP.accumulate(this.toRender.getNextFrame());
    mbfPP.performProjection(0, 0, renderer.getImage());
  }
};

Code example from origin: org.openimaj/FaceTools

System.out.println( "    - Size: " + video.getWidth() + "x" + video.getHeight() );
System.out.println( "    - Frame Rate: " + video.getFPS() );
System.out.println( "Detecting shots in video..." );
this.video.reset();
// ...
video.setCurrentFrameIndex( mframe );
faceFrame = video.getCurrentFrame();
doneSearching = true;
// ...
pframe += options.seconds * video.getFPS();
video.setCurrentFrameIndex( pframe );
faceFrame = video.getCurrentFrame();
// ...
  video.getCurrentTimecode() + " (" + video.getTimeStamp() + ")" );
IndependentPair<VideoTimecode, VideoTimecode> timecodes =
  otf.trackObject( new BasicMBFImageObjectTracker(), video,
    video.getCurrentTimecode(), f.getBounds(),
    new TimeFinderListener<Rectangle, MBFImage>()
Code example from origin: openimaj/openimaj

public VideoWithinVideo(String videoPath) throws IOException {
  this.videoFile = new File(videoPath);
  this.video = new XuggleVideo(videoFile, true);
  this.capture = new VideoCapture(320, 240);
  nextCaptureFrame = capture.getNextFrame().clone();
  this.videoRect = new Rectangle(0, 0, video.getWidth(), video.getHeight());
  this.captureToVideo = TransformUtilities.makeTransform(
      new Rectangle(0, 0, capture.getWidth(), capture.getHeight()),
      videoRect
      );
  display = VideoDisplay.createVideoDisplay(video);
  new CaptureVideoSIFT(this);
  display.addVideoListener(this);
  // targetArea = new Polygon(
  // new Point2dImpl(100,100),
  // new Point2dImpl(200,150),
  // new Point2dImpl(200,230),
  // new Point2dImpl(0,200)
  // );
  //
  // Prepare the homography matrix
  pointList = new ArrayList<IndependentPair<Point2d, Point2d>>();
  pointList.add(IndependentPair.pair((Point2d) topLeftB, (Point2d) topLeftS));
  pointList.add(IndependentPair.pair((Point2d) topRightB, (Point2d) topRightS));
  pointList.add(IndependentPair.pair((Point2d) bottomRightB, (Point2d) bottomRightS));
  pointList.add(IndependentPair.pair((Point2d) bottomLeftB, (Point2d) bottomLeftS));
}

Code example from origin: openimaj/openimaj

final XuggleVideo v = new XuggleVideo( new File( filename ) );
final double fps = v.getFPS();

Code example from origin: openimaj/openimaj

video = new XuggleVideo(url);
video.setCurrentFrameIndex( 10 );

Code example from origin: openimaj/openimaj

@Override
public void init(XuggleVideo video){
  this.framesToSkip = (int) (video.getFPS() / this.framesPerSecond);
  if(this.framesToSkip < 1) this.framesToSkip = 1;
  this.framesCount = 0;
}
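The init() above derives a subsampling step from the source and target frame rates: skip floor(sourceFps / targetFps) frames per emitted frame, but never fewer than one. The arithmetic on its own (illustrative names, not OpenIMAJ API):

```java
public class FrameSkipMath {
    /** Frames to advance per emitted frame, never less than one. */
    static int framesToSkip(double sourceFps, double targetFps) {
        int skip = (int) (sourceFps / targetFps);
        // If the source is already slower than the target rate, we still
        // have to advance by at least one frame each step.
        return skip < 1 ? 1 : skip;
    }

    public static void main(String[] args) {
        System.out.println(framesToSkip(30.0, 5.0));  // 6: emit every 6th frame
        System.out.println(framesToSkip(10.0, 25.0)); // 1: source slower than target
    }
}
```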

Code example from origin: org.openimaj/xuggle-video

  return;
this.create(url);
this.getNextFrame();
// ...
else
  logger.error("Seek returned an error value: " + ret + ": "

Code example from origin: org.openimaj/xuggle-video

/**
 * {@inheritDoc}
 *
 * @see org.openimaj.video.Video#getCurrentFrame()
 */
@Override
public MBFImage getCurrentFrame() {
  if (this.currentMBFImage == null)
    this.currentMBFImage = this.getNextFrame();
  return this.currentMBFImage;
}
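getCurrentFrame() lazily pulls a frame only when none is cached, so repeated calls never advance the stream. The pattern in isolation (a generic sketch with stand-in types, not OpenIMAJ code):

```java
public class LazyCurrentFrame {
    private String currentFrame = null;
    int framesRead = 0;

    /** Stand-in for getNextFrame(): advances the stream and counts reads. */
    String nextFrame() {
        framesRead++;
        return "frame-" + framesRead;
    }

    /** Returns the cached frame, pulling one only on the first call. */
    String currentFrame() {
        if (this.currentFrame == null)
            this.currentFrame = this.nextFrame();
        return this.currentFrame;
    }

    public static void main(String[] args) {
        LazyCurrentFrame v = new LazyCurrentFrame();
        v.currentFrame();
        v.currentFrame();
        // Two calls, but the underlying stream advanced only once.
        System.out.println(v.framesRead); // 1
    }
}
```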

Code example from origin: openimaj/openimaj

@Override
public int getWidth() {
  return vid.getWidth();
}

Code example from origin: openimaj/openimaj

@Override
public long countFrames() {
  return vid.countFrames();
}

Code example from origin: openimaj/openimaj

@Override
public MBFImage getCurrentFrame() {
  return vid.getCurrentFrame();
}

Code example from origin: openimaj/openimaj

@Override
public void reset() {
  vid.reset();
}

Code example from origin: openimaj/openimaj

@Override
public long getTimeStamp() {
  return vid.getTimeStamp();
}

Code example from origin: org.openimaj/demos

@Override
  public void render(final MBFImageRenderer renderer, final Matrix transform, final Rectangle rectangle) {
    if (this.toRender == null) {
      this.toRender = new XuggleVideo(
          VideoSIFT.class.getResource("/org/openimaj/demos/video/keyboardcat.flv"), true);
      this.renderToBounds = TransformUtilities.makeTransform(new Rectangle(0, 0, this.toRender.getWidth(),
          this.toRender.getHeight()), rectangle);
    }
    final MBFProjectionProcessor mbfPP = new MBFProjectionProcessor();
    mbfPP.setMatrix(transform.times(this.renderToBounds));
    mbfPP.accumulate(this.toRender.getNextFrame());
    mbfPP.performProjection(0, 0, renderer.getImage());
  }
};

Code example from origin: openimaj/openimaj

System.out.println( "    - Size: " + video.getWidth() + "x" + video.getHeight() );
System.out.println( "    - Frame Rate: " + video.getFPS() );
System.out.println( "Detecting shots in video..." );
this.video.reset();
// ...
video.setCurrentFrameIndex( mframe );
faceFrame = video.getCurrentFrame();
doneSearching = true;
// ...
pframe += options.seconds * video.getFPS();
video.setCurrentFrameIndex( pframe );
faceFrame = video.getCurrentFrame();
// ...
  video.getCurrentTimecode() + " (" + video.getTimeStamp() + ")" );
IndependentPair<VideoTimecode, VideoTimecode> timecodes =
  otf.trackObject( new BasicMBFImageObjectTracker(), video,
    video.getCurrentTimecode(), f.getBounds(),
    new TimeFinderListener<Rectangle, MBFImage>()

Code example from origin: org.openimaj/sandbox

public VideoWithinVideo(String videoPath) throws IOException {
  this.videoFile = new File(videoPath);
  this.video = new XuggleVideo(videoFile, true);
  this.capture = new VideoCapture(320, 240);
  nextCaptureFrame = capture.getNextFrame().clone();
  this.videoRect = new Rectangle(0, 0, video.getWidth(), video.getHeight());
  this.captureToVideo = TransformUtilities.makeTransform(
      new Rectangle(0, 0, capture.getWidth(), capture.getHeight()),
      videoRect
      );
  display = VideoDisplay.createVideoDisplay(video);
  new CaptureVideoSIFT(this);
  display.addVideoListener(this);
  // targetArea = new Polygon(
  // new Point2dImpl(100,100),
  // new Point2dImpl(200,150),
  // new Point2dImpl(200,230),
  // new Point2dImpl(0,200)
  // );
  //
  // Prepare the homography matrix
  pointList = new ArrayList<IndependentPair<Point2d, Point2d>>();
  pointList.add(IndependentPair.pair((Point2d) topLeftB, (Point2d) topLeftS));
  pointList.add(IndependentPair.pair((Point2d) topRightB, (Point2d) topRightS));
  pointList.add(IndependentPair.pair((Point2d) bottomRightB, (Point2d) bottomRightS));
  pointList.add(IndependentPair.pair((Point2d) bottomLeftB, (Point2d) bottomLeftS));
}

Copyright 2021 - 2024 cfsdn All Rights Reserved 蜀ICP备2022000587号