I am following this code to create a video from an array of UIImages. There is no animation when transitioning from one image to another, and I want to add some photo transition effects between them.
These animations can be done on screen with UIView.transition() and UIView.animate(). But how can I apply such transition animations while making a video from a UIImage array? I have searched a lot but found nothing.
I have also tried HJImagesToVideo, but it only offers a crossfade transition.
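For example, this is the kind of effect I mean between two images (a minimal sketch; imageView and nextImage are placeholders for an on-screen image view and its next image):

// Crossfade the contents of an on-screen UIImageView.
UIView.transition(with: imageView,
                  duration: 0.5,
                  options: .transitionCrossDissolve,
                  animations: { imageView.image = nextImage },
                  completion: nil)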
Best Answer
I was stuck on this same problem for quite a while. Since this is a long answer it may get tedious, but I hope it helps. Here is my answer, in Swift 3.
Theory:
First, you have to make a video from your array of images. There are plenty of solutions for this; search a little and hopefully you will find one. If not, see the code below. Second, you may know that a CALayer can be added on top of a video. If not, search for how to add a CALayer over a video. Third, you have to know how to animate a CALayer; Core Animation does that job for you, and there is a lot of sample code available for it. At that point you have a CALayer and you know how to animate it. Finally, you have to add this animated CALayer to the video.
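For instance, animating a layer with Core Animation boils down to something like this (a minimal sketch; slidingLayer is a placeholder for any CALayer you have created):

// Slide a layer horizontally across the screen over 3 seconds.
let slide = CABasicAnimation(keyPath: "position.x")
slide.fromValue = -slidingLayer.bounds.width
slide.toValue = UIScreen.main.bounds.width
slide.duration = 3
slide.fillMode = kCAFillModeForwards   // keep the final position (Swift 3 constant)
slide.isRemovedOnCompletion = false
slidingLayer.add(slide, forKey: "slide")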
Code:
1. Make a video using your image array
let outputSize = CGSize(width: 1920, height: 1280)
let imagesPerSecond: TimeInterval = 3 //each image will stay for 3 secs
var selectedPhotosArray = [UIImage]()
var imageArrayToVideoURL = NSURL()
let audioIsEnabled: Bool = false //if your video has no sound
var asset: AVAsset!
func buildVideoFromImageArray() {
    for image in 0..<5 {
        selectedPhotosArray.append(UIImage(named: "\(image + 1).JPG")!) //name of the images: 1.JPG, 2.JPG, 3.JPG, 4.JPG, 5.JPG
    }

    imageArrayToVideoURL = NSURL(fileURLWithPath: NSHomeDirectory() + "/Documents/video1.MP4")
    removeFileAtURLIfExists(url: imageArrayToVideoURL)

    guard let videoWriter = try? AVAssetWriter(outputURL: imageArrayToVideoURL as URL, fileType: AVFileTypeMPEG4) else {
        fatalError("AVAssetWriter error")
    }

    let outputSettings = [AVVideoCodecKey : AVVideoCodecH264, AVVideoWidthKey : NSNumber(value: Float(outputSize.width)), AVVideoHeightKey : NSNumber(value: Float(outputSize.height))] as [String : Any]
    guard videoWriter.canApply(outputSettings: outputSettings, forMediaType: AVMediaTypeVideo) else {
        fatalError("Negative : Can't apply the Output settings...")
    }

    let videoWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: outputSettings)
    let sourcePixelBufferAttributesDictionary = [kCVPixelBufferPixelFormatTypeKey as String : NSNumber(value: kCVPixelFormatType_32ARGB), kCVPixelBufferWidthKey as String: NSNumber(value: Float(outputSize.width)), kCVPixelBufferHeightKey as String: NSNumber(value: Float(outputSize.height))]
    let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterInput, sourcePixelBufferAttributes: sourcePixelBufferAttributesDictionary)

    if videoWriter.canAdd(videoWriterInput) {
        videoWriter.add(videoWriterInput)
    }

    if videoWriter.startWriting() {
        let zeroTime = CMTimeMake(Int64(imagesPerSecond), Int32(1))
        videoWriter.startSession(atSourceTime: zeroTime)
        assert(pixelBufferAdaptor.pixelBufferPool != nil)

        let media_queue = DispatchQueue(label: "mediaInputQueue")
        videoWriterInput.requestMediaDataWhenReady(on: media_queue, using: { () -> Void in
            let fps: Int32 = 1
            let framePerSecond: Int64 = Int64(self.imagesPerSecond)
            let frameDuration = CMTimeMake(Int64(self.imagesPerSecond), fps)
            var frameCount: Int64 = 0
            var appendSucceeded = true

            while (!self.selectedPhotosArray.isEmpty) {
                if (videoWriterInput.isReadyForMoreMediaData) {
                    let nextPhoto = self.selectedPhotosArray.remove(at: 0)
                    let lastFrameTime = CMTimeMake(frameCount * framePerSecond, fps)
                    let presentationTime = frameCount == 0 ? lastFrameTime : CMTimeAdd(lastFrameTime, frameDuration)

                    var pixelBuffer: CVPixelBuffer? = nil
                    let status: CVReturn = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferAdaptor.pixelBufferPool!, &pixelBuffer)

                    if let pixelBuffer = pixelBuffer, status == 0 {
                        let managedPixelBuffer = pixelBuffer
                        CVPixelBufferLockBaseAddress(managedPixelBuffer, CVPixelBufferLockFlags(rawValue: CVOptionFlags(0)))

                        let data = CVPixelBufferGetBaseAddress(managedPixelBuffer)
                        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
                        let context = CGContext(data: data, width: Int(self.outputSize.width), height: Int(self.outputSize.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(managedPixelBuffer), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)
                        context!.clear(CGRect(x: 0, y: 0, width: CGFloat(self.outputSize.width), height: CGFloat(self.outputSize.height)))

                        let horizontalRatio = CGFloat(self.outputSize.width) / nextPhoto.size.width
                        let verticalRatio = CGFloat(self.outputSize.height) / nextPhoto.size.height
                        //let aspectRatio = max(horizontalRatio, verticalRatio) // ScaleAspectFill
                        let aspectRatio = min(horizontalRatio, verticalRatio) // ScaleAspectFit
                        let newSize: CGSize = CGSize(width: nextPhoto.size.width * aspectRatio, height: nextPhoto.size.height * aspectRatio)
                        let x = newSize.width < self.outputSize.width ? (self.outputSize.width - newSize.width) / 2 : 0
                        let y = newSize.height < self.outputSize.height ? (self.outputSize.height - newSize.height) / 2 : 0

                        context?.draw(nextPhoto.cgImage!, in: CGRect(x: x, y: y, width: newSize.width, height: newSize.height))
                        CVPixelBufferUnlockBaseAddress(managedPixelBuffer, CVPixelBufferLockFlags(rawValue: CVOptionFlags(0)))
                        appendSucceeded = pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime)
                    } else {
                        print("Failed to allocate pixel buffer")
                        appendSucceeded = false
                    }
                }
                if !appendSucceeded {
                    break
                }
                frameCount += 1
            }

            videoWriterInput.markAsFinished()
            videoWriter.finishWriting { () -> Void in
                print("-----video1 url = \(self.imageArrayToVideoURL)")
                self.asset = AVAsset(url: self.imageArrayToVideoURL as URL)
                self.exportVideoWithAnimation()
            }
        })
    }
}
func removeFileAtURLIfExists(url: NSURL) {
    if let filePath = url.path {
        let fileManager = FileManager.default
        if fileManager.fileExists(atPath: filePath) {
            do {
                try fileManager.removeItem(atPath: filePath)
            } catch let error as NSError {
                print("Couldn't remove existing destination file: \(error)")
            }
        }
    }
}
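Calling the builder is straightforward; for example (a hypothetical call site, assuming these properties and functions live in your view controller):

override func viewDidLoad() {
    super.viewDidLoad()
    buildVideoFromImageArray() // writes video1.MP4, then hands off to exportVideoWithAnimation()
}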
2. Add animations to the created video (please read all the commented sections carefully; I think they will clear up a few things)
func exportVideoWithAnimation() {
    let composition = AVMutableComposition()
    let track = asset?.tracks(withMediaType: AVMediaTypeVideo)
    let videoTrack: AVAssetTrack = track![0] as AVAssetTrack
    let timerange = CMTimeRangeMake(kCMTimeZero, (asset?.duration)!)

    let compositionVideoTrack: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID())
    do {
        try compositionVideoTrack.insertTimeRange(timerange, of: videoTrack, at: kCMTimeZero)
        compositionVideoTrack.preferredTransform = videoTrack.preferredTransform
    } catch {
        print(error)
    }

    //if your video has sound, you don't need to check this
    if audioIsEnabled {
        let compositionAudioTrack: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID())
        for audioTrack in (asset?.tracks(withMediaType: AVMediaTypeAudio))! {
            do {
                try compositionAudioTrack.insertTimeRange(audioTrack.timeRange, of: audioTrack, at: kCMTimeZero)
            } catch {
                print(error)
            }
        }
    }

    let size = videoTrack.naturalSize

    let videolayer = CALayer()
    videolayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    let parentlayer = CALayer()
    parentlayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    parentlayer.addSublayer(videolayer)

    ////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
    //this is the animation part
    var time = [0.00001, 3, 6, 9, 12] //I used this time array to determine the start time of a frame animation. Each frame will stay for 3 secs, that's why their difference is 3
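    // Note: Core Animation treats a beginTime of exactly 0 as "begin now"
    // (CACurrentMediaTime()). For layer animations attached to a video
    // composition, AVFoundation provides AVCoreAnimationBeginTimeAtZero for
    // this reason, which is presumably why the first entry is 0.00001, not 0.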
    var imgarray = [UIImage]()
    for image in 0..<5 {
        imgarray.append(UIImage(named: "\(image + 1).JPG")!)

        let nextPhoto = imgarray[image]
        let horizontalRatio = CGFloat(self.outputSize.width) / nextPhoto.size.width
        let verticalRatio = CGFloat(self.outputSize.height) / nextPhoto.size.height
        let aspectRatio = min(horizontalRatio, verticalRatio)
        let newSize: CGSize = CGSize(width: nextPhoto.size.width * aspectRatio, height: nextPhoto.size.height * aspectRatio)
        let x = newSize.width < self.outputSize.width ? (self.outputSize.width - newSize.width) / 2 : 0
        let y = newSize.height < self.outputSize.height ? (self.outputSize.height - newSize.height) / 2 : 0

        ///I showed 10 animations here. You can uncomment any one of them and export a video to see the result.
        ///#1. left->right///
        let blackLayer = CALayer()
        blackLayer.frame = CGRect(x: -videoTrack.naturalSize.width, y: 0, width: videoTrack.naturalSize.width, height: videoTrack.naturalSize.height)
        blackLayer.backgroundColor = UIColor.black.cgColor

        let imageLayer = CALayer()
        imageLayer.frame = CGRect(x: x, y: y, width: newSize.width, height: newSize.height)
        imageLayer.contents = imgarray[image].cgImage
        blackLayer.addSublayer(imageLayer)

        let animation = CABasicAnimation()
        animation.keyPath = "position.x"
        animation.fromValue = -videoTrack.naturalSize.width
        animation.toValue = 2 * (videoTrack.naturalSize.width)
        animation.duration = 3
        animation.beginTime = CFTimeInterval(time[image])
        animation.fillMode = kCAFillModeForwards
        animation.isRemovedOnCompletion = false
        blackLayer.add(animation, forKey: "basic")
        ///#2. right->left///
        // let blackLayer = CALayer()
        // blackLayer.frame = CGRect(x: 2 * videoTrack.naturalSize.width, y: 0, width: videoTrack.naturalSize.width, height: videoTrack.naturalSize.height)
        // blackLayer.backgroundColor = UIColor.black.cgColor
        //
        // let imageLayer = CALayer()
        // imageLayer.frame = CGRect(x: x, y: y, width: newSize.width, height: newSize.height)
        // imageLayer.contents = imgarray[image].cgImage
        // blackLayer.addSublayer(imageLayer)
        //
        // let animation = CABasicAnimation()
        // animation.keyPath = "position.x"
        // animation.fromValue = 2 * (videoTrack.naturalSize.width)
        // animation.toValue = -videoTrack.naturalSize.width
        // animation.duration = 3
        // animation.beginTime = CFTimeInterval(time[image])
        // animation.fillMode = kCAFillModeForwards
        // animation.isRemovedOnCompletion = false
        // blackLayer.add(animation, forKey: "basic")

        ///#3. top->bottom///
        // let blackLayer = CALayer()
        // blackLayer.frame = CGRect(x: 0, y: 2 * videoTrack.naturalSize.height, width: videoTrack.naturalSize.width, height: videoTrack.naturalSize.height)
        // blackLayer.backgroundColor = UIColor.black.cgColor
        //
        // let imageLayer = CALayer()
        // imageLayer.frame = CGRect(x: x, y: y, width: newSize.width, height: newSize.height)
        // imageLayer.contents = imgarray[image].cgImage
        // blackLayer.addSublayer(imageLayer)
        //
        // let animation = CABasicAnimation()
        // animation.keyPath = "position.y"
        // animation.fromValue = 2 * videoTrack.naturalSize.height
        // animation.toValue = -videoTrack.naturalSize.height
        // animation.duration = 3
        // animation.beginTime = CFTimeInterval(time[image])
        // animation.fillMode = kCAFillModeForwards
        // animation.isRemovedOnCompletion = false
        // blackLayer.add(animation, forKey: "basic")

        ///#4. bottom->top///
        // let blackLayer = CALayer()
        // blackLayer.frame = CGRect(x: 0, y: -videoTrack.naturalSize.height, width: videoTrack.naturalSize.width, height: videoTrack.naturalSize.height)
        // blackLayer.backgroundColor = UIColor.black.cgColor
        //
        // let imageLayer = CALayer()
        // imageLayer.frame = CGRect(x: x, y: y, width: newSize.width, height: newSize.height)
        // imageLayer.contents = imgarray[image].cgImage
        // blackLayer.addSublayer(imageLayer)
        //
        // let animation = CABasicAnimation()
        // animation.keyPath = "position.y"
        // animation.fromValue = -videoTrack.naturalSize.height
        // animation.toValue = 2 * videoTrack.naturalSize.height
        // animation.duration = 3
        // animation.beginTime = CFTimeInterval(time[image])
        // animation.fillMode = kCAFillModeForwards
        // animation.isRemovedOnCompletion = false
        // blackLayer.add(animation, forKey: "basic")

        ///#5. opacity(1->0)(left->right)///
        // let blackLayer = CALayer()
        // blackLayer.frame = CGRect(x: -videoTrack.naturalSize.width, y: 0, width: videoTrack.naturalSize.width, height: videoTrack.naturalSize.height)
        // blackLayer.backgroundColor = UIColor.black.cgColor
        //
        // let imageLayer = CALayer()
        // imageLayer.frame = CGRect(x: x, y: y, width: newSize.width, height: newSize.height)
        // imageLayer.contents = imgarray[image].cgImage
        // blackLayer.addSublayer(imageLayer)
        //
        // let animation = CABasicAnimation()
        // animation.keyPath = "position.x"
        // animation.fromValue = -videoTrack.naturalSize.width
        // animation.toValue = 2 * (videoTrack.naturalSize.width)
        // animation.duration = 3
        // animation.beginTime = CFTimeInterval(time[image])
        // animation.fillMode = kCAFillModeForwards
        // animation.isRemovedOnCompletion = false
        // blackLayer.add(animation, forKey: "basic")
        //
        // let fadeOutAnimation = CABasicAnimation(keyPath: "opacity")
        // fadeOutAnimation.fromValue = 1
        // fadeOutAnimation.toValue = 0
        // fadeOutAnimation.duration = 3
        // fadeOutAnimation.beginTime = CFTimeInterval(time[image])
        // fadeOutAnimation.isRemovedOnCompletion = false
        // blackLayer.add(fadeOutAnimation, forKey: "opacity")

        ///#6. opacity(1->0)(right->left)///
        // let blackLayer = CALayer()
        // blackLayer.frame = CGRect(x: 2 * videoTrack.naturalSize.width, y: 0, width: videoTrack.naturalSize.width, height: videoTrack.naturalSize.height)
        // blackLayer.backgroundColor = UIColor.black.cgColor
        //
        // let imageLayer = CALayer()
        // imageLayer.frame = CGRect(x: x, y: y, width: newSize.width, height: newSize.height)
        // imageLayer.contents = imgarray[image].cgImage
        // blackLayer.addSublayer(imageLayer)
        //
        // let animation = CABasicAnimation()
        // animation.keyPath = "position.x"
        // animation.fromValue = 2 * videoTrack.naturalSize.width
        // animation.toValue = -videoTrack.naturalSize.width
        // animation.duration = 3
        // animation.beginTime = CFTimeInterval(time[image])
        // animation.fillMode = kCAFillModeForwards
        // animation.isRemovedOnCompletion = false
        // blackLayer.add(animation, forKey: "basic")
        //
        // let fadeOutAnimation = CABasicAnimation(keyPath: "opacity")
        // fadeOutAnimation.fromValue = 1
        // fadeOutAnimation.toValue = 0
        // fadeOutAnimation.duration = 3
        // fadeOutAnimation.beginTime = CFTimeInterval(time[image])
        // fadeOutAnimation.isRemovedOnCompletion = false
        // blackLayer.add(fadeOutAnimation, forKey: "opacity")

        ///#7. opacity(1->0)(top->bottom)///
        // let blackLayer = CALayer()
        // blackLayer.frame = CGRect(x: 0, y: 2 * videoTrack.naturalSize.height, width: videoTrack.naturalSize.width, height: videoTrack.naturalSize.height)
        // blackLayer.backgroundColor = UIColor.black.cgColor
        //
        // let imageLayer = CALayer()
        // imageLayer.frame = CGRect(x: x, y: y, width: newSize.width, height: newSize.height)
        // imageLayer.contents = imgarray[image].cgImage
        // blackLayer.addSublayer(imageLayer)
        //
        // let animation = CABasicAnimation()
        // animation.keyPath = "position.y"
        // animation.fromValue = 2 * videoTrack.naturalSize.height
        // animation.toValue = -videoTrack.naturalSize.height
        // animation.duration = 3
        // animation.beginTime = CFTimeInterval(time[image])
        // animation.fillMode = kCAFillModeForwards
        // animation.isRemovedOnCompletion = false
        // blackLayer.add(animation, forKey: "basic")
        //
        // let fadeOutAnimation = CABasicAnimation(keyPath: "opacity")
        // fadeOutAnimation.fromValue = 1
        // fadeOutAnimation.toValue = 0
        // fadeOutAnimation.duration = 3
        // fadeOutAnimation.beginTime = CFTimeInterval(time[image])
        // fadeOutAnimation.isRemovedOnCompletion = false
        // blackLayer.add(fadeOutAnimation, forKey: "opacity")

        ///#8. opacity(1->0)(bottom->top)///
        // let blackLayer = CALayer()
        // blackLayer.frame = CGRect(x: 0, y: -videoTrack.naturalSize.height, width: videoTrack.naturalSize.width, height: videoTrack.naturalSize.height)
        // blackLayer.backgroundColor = UIColor.black.cgColor
        //
        // let imageLayer = CALayer()
        // imageLayer.frame = CGRect(x: x, y: y, width: newSize.width, height: newSize.height)
        // imageLayer.contents = imgarray[image].cgImage
        // blackLayer.addSublayer(imageLayer)
        //
        // let animation = CABasicAnimation()
        // animation.keyPath = "position.y"
        // animation.fromValue = -videoTrack.naturalSize.height
        // animation.toValue = 2 * videoTrack.naturalSize.height
        // animation.duration = 3
        // animation.beginTime = CFTimeInterval(time[image])
        // animation.fillMode = kCAFillModeForwards
        // animation.isRemovedOnCompletion = false
        // blackLayer.add(animation, forKey: "basic")
        //
        // let fadeOutAnimation = CABasicAnimation(keyPath: "opacity")
        // fadeOutAnimation.fromValue = 1
        // fadeOutAnimation.toValue = 0
        // fadeOutAnimation.duration = 3
        // fadeOutAnimation.beginTime = CFTimeInterval(time[image])
        // fadeOutAnimation.isRemovedOnCompletion = false
        // blackLayer.add(fadeOutAnimation, forKey: "opacity")

        ///#9. scale(small->big->small)///
        // let blackLayer = CALayer()
        // blackLayer.frame = CGRect(x: 0, y: 0, width: videoTrack.naturalSize.width, height: videoTrack.naturalSize.height)
        // blackLayer.backgroundColor = UIColor.black.cgColor
        // blackLayer.opacity = 0
        //
        // let imageLayer = CALayer()
        // imageLayer.frame = CGRect(x: x, y: y, width: newSize.width, height: newSize.height)
        // imageLayer.contents = imgarray[image].cgImage
        // blackLayer.addSublayer(imageLayer)
        //
        // let scaleAnimation = CAKeyframeAnimation(keyPath: "transform.scale")
        // scaleAnimation.values = [0, 1.0, 0]
        // scaleAnimation.beginTime = CFTimeInterval(time[image])
        // scaleAnimation.duration = 3
        // scaleAnimation.isRemovedOnCompletion = false
        // blackLayer.add(scaleAnimation, forKey: "transform.scale")
        //
        // let fadeInOutAnimation = CABasicAnimation(keyPath: "opacity")
        // fadeInOutAnimation.fromValue = 1
        // fadeInOutAnimation.toValue = 1
        // fadeInOutAnimation.duration = 3
        // fadeInOutAnimation.beginTime = CFTimeInterval(time[image])
        // fadeInOutAnimation.isRemovedOnCompletion = false
        // blackLayer.add(fadeInOutAnimation, forKey: "opacity")

        ///#10. scale(big->small->big)///
        // let blackLayer = CALayer()
        // blackLayer.frame = CGRect(x: 0, y: 0, width: videoTrack.naturalSize.width, height: videoTrack.naturalSize.height)
        // blackLayer.backgroundColor = UIColor.black.cgColor
        // blackLayer.opacity = 0
        //
        // let imageLayer = CALayer()
        // imageLayer.frame = CGRect(x: x, y: y, width: newSize.width, height: newSize.height)
        // imageLayer.contents = imgarray[image].cgImage
        // blackLayer.addSublayer(imageLayer)
        //
        // let scaleAnimation = CAKeyframeAnimation(keyPath: "transform.scale")
        // scaleAnimation.values = [1, 0, 1]
        // scaleAnimation.beginTime = CFTimeInterval(time[image])
        // scaleAnimation.duration = 3
        // scaleAnimation.isRemovedOnCompletion = false
        // blackLayer.add(scaleAnimation, forKey: "transform.scale")
        //
        // let fadeOutAnimation = CABasicAnimation(keyPath: "opacity")
        // fadeOutAnimation.fromValue = 1
        // fadeOutAnimation.toValue = 1
        // fadeOutAnimation.duration = 3
        // fadeOutAnimation.beginTime = CFTimeInterval(time[image])
        // fadeOutAnimation.isRemovedOnCompletion = false
        // blackLayer.add(fadeOutAnimation, forKey: "opacity")
        parentlayer.addSublayer(blackLayer)
    }
    ///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

    let layercomposition = AVMutableVideoComposition()
    layercomposition.frameDuration = CMTimeMake(1, 30)
    layercomposition.renderSize = size
    layercomposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videolayer, in: parentlayer)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, composition.duration)
    let videotrack = composition.tracks(withMediaType: AVMediaTypeVideo)[0] as AVAssetTrack
    let layerinstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videotrack)
    instruction.layerInstructions = [layerinstruction]
    layercomposition.instructions = [instruction]

    let animatedVideoURL = NSURL(fileURLWithPath: NSHomeDirectory() + "/Documents/video2.mp4")
    removeFileAtURLIfExists(url: animatedVideoURL)

    guard let assetExport = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality) else { return }
    assetExport.videoComposition = layercomposition
    assetExport.outputFileType = AVFileTypeMPEG4
    assetExport.outputURL = animatedVideoURL as URL
    assetExport.exportAsynchronously(completionHandler: {
        switch assetExport.status {
        case AVAssetExportSessionStatus.failed:
            print("failed \(String(describing: assetExport.error))")
        case AVAssetExportSessionStatus.cancelled:
            print("cancelled \(String(describing: assetExport.error))")
        default:
            print("Exported")
        }
    })
}
Don't forget to import AVFoundation.
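If you want to preview the exported file, one option is AVPlayerViewController (a minimal sketch; playExportedVideo is a hypothetical helper and needs import AVKit):

import AVKit

func playExportedVideo(from url: URL, on presenter: UIViewController) {
    // Present a standard player UI for the exported video2.mp4.
    let playerController = AVPlayerViewController()
    playerController.player = AVPlayer(url: url)
    presenter.present(playerController, animated: true) {
        playerController.player?.play()
    }
}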
Regarding ios - making a video from a UIImage array with different transition animations, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/46315867/