Based on this SO post, the code below rotates, centers, and crops a video captured live by the user.
The capture session uses AVCaptureSessionPresetHigh as the preset, and the preview layer uses AVLayerVideoGravityResizeAspectFill as the video gravity. This preview is extremely sharp.
The exported video, however, is not as sharp, ostensibly because scaling down from the 5S back camera's 1920x1080 resolution to 320x568 (the target size of the exported video) discards pixels and introduces blur.
Assuming there is no way to scale from 1920x1080 to 320x568 without some blurring, the question becomes: how do I replicate the sharpness of the preview layer?
Somehow Apple is using an algorithm to turn the 1920x1080 video into a crisp 320x568 preview frame.
Is there a way to mimic this with AVAssetWriter or AVAssetExportSession?
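One detail worth hedging here (my aside, not part of the original post): 320x568 is the view's size in points, while the preview layer is rendered at the screen's native scale, so on a 5S the preview is actually drawn at 640x1136 pixels. A minimal sketch of computing a pixel-accurate render size, assuming a 320x568-point view:

import UIKit

// Sketch: derive the render size in pixels from the view's point size,
// so the exported video keeps the detail the preview layer shows.
let pointSize = CGSizeMake(320, 568)            // view.bounds.size, in points
let screenScale = UIScreen.mainScreen().scale   // 2.0 on an iPhone 5S
let pixelSize = CGSizeMake(pointSize.width * screenScale,
                           pointSize.height * screenScale)  // 640x1136 on a 5S
// Using pixelSize (rather than pointSize) as videoComposition.renderSize
// discards fewer source pixels when scaling down from 1920x1080.

Here is the export code in question: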
func cropVideo() {
    // Set start time
    let startTime = NSDate().timeIntervalSince1970

    // Create main composition & its tracks
    let mainComposition = AVMutableComposition()
    let compositionVideoTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))
    let compositionAudioTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))

    // Get source video & audio tracks
    let videoPath = getFilePath(curSlice!.getCaptureURL())
    let videoURL = NSURL(fileURLWithPath: videoPath)
    let videoAsset = AVURLAsset(URL: videoURL, options: nil)
    let sourceVideoTrack = videoAsset.tracksWithMediaType(AVMediaTypeVideo)[0]
    let sourceAudioTrack = videoAsset.tracksWithMediaType(AVMediaTypeAudio)[0]
    let videoSize = sourceVideoTrack.naturalSize

    // Get rounded time for video
    let roundedDur = floor(curSlice!.getDur() * 100) / 100
    let videoDur = CMTimeMakeWithSeconds(roundedDur, 100)

    // Add source tracks to composition
    do {
        try compositionVideoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoDur), ofTrack: sourceVideoTrack, atTime: kCMTimeZero)
        try compositionAudioTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoDur), ofTrack: sourceAudioTrack, atTime: kCMTimeZero)
    } catch {
        print("Error with insertTimeRange while exporting video: \(error)")
    }

    // Create video composition
    // -- Set video frame
    let outputSize = view.bounds.size
    let videoComposition = AVMutableVideoComposition()
    print("Video composition duration: \(CMTimeGetSeconds(mainComposition.duration))")

    // -- Set parent layer
    let parentLayer = CALayer()
    parentLayer.frame = CGRectMake(0, 0, outputSize.width, outputSize.height)
    parentLayer.contentsGravity = kCAGravityResizeAspectFill

    // -- Set composition props
    videoComposition.renderSize = CGSize(width: outputSize.width, height: outputSize.height)
    videoComposition.frameDuration = CMTimeMake(1, Int32(frameRate))

    // -- Create video composition instruction
    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, videoDur)

    // -- Use layer instruction to match video to output size, mimicking AVLayerVideoGravityResizeAspectFill
    let videoLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: compositionVideoTrack)
    let videoTransform = getResizeAspectFillTransform(videoSize, outputSize: outputSize)
    videoLayerInstruction.setTransform(videoTransform, atTime: kCMTimeZero)

    // -- Add layer instruction
    instruction.layerInstructions = [videoLayerInstruction]
    videoComposition.instructions = [instruction]

    // -- Create video layer
    let videoLayer = CALayer()
    videoLayer.frame = parentLayer.frame

    // -- Add sublayers to parent layer
    parentLayer.addSublayer(videoLayer)

    // -- Set animation tool
    videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, inLayer: parentLayer)

    // Create exporter
    let outputURL = getFilePath(getUniqueFilename(gMP4File))
    let exporter = AVAssetExportSession(asset: mainComposition, presetName: AVAssetExportPresetHighestQuality)!
    exporter.outputURL = NSURL(fileURLWithPath: outputURL)
    exporter.outputFileType = AVFileTypeMPEG4
    exporter.videoComposition = videoComposition
    exporter.shouldOptimizeForNetworkUse = true
    exporter.canPerformMultiplePassesOverSourceMediaData = true

    // Export to video
    exporter.exportAsynchronouslyWithCompletionHandler({
        // Log status
        let asset = AVAsset(URL: exporter.outputURL!)
        print("Exported slice video. Tracks: \(asset.tracks.count). Duration: \(CMTimeGetSeconds(asset.duration)). Size: \(exporter.estimatedOutputFileLength). Status: \(getExportStatus(exporter)). Output URL: \(exporter.outputURL!). Export time: \(NSDate().timeIntervalSince1970 - startTime).")

        // Tell delegate
        //delegate.didEndExport(exporter)
        self.curSlice!.setOutputURL(exporter.outputURL!.lastPathComponent!)
        gUser.save()
    })
}
// Returns transform, mimicking AVLayerVideoGravityResizeAspectFill, that converts video of <inputSize> to one of <outputSize>
private func getResizeAspectFillTransform(videoSize: CGSize, outputSize: CGSize) -> CGAffineTransform {
    // Compute ratios between video & output sizes
    let widthRatio = outputSize.width / videoSize.width
    let heightRatio = outputSize.height / videoSize.height

    // Set scale to larger of two ratios since goal is to fill output bounds
    let scale = widthRatio >= heightRatio ? widthRatio : heightRatio

    // Compute video size after scaling
    let newWidth = videoSize.width * scale
    let newHeight = videoSize.height * scale

    // Compute translation required to center image after scaling
    // -- Assumes CoreAnimationTool places video frame at (0, 0). Because scale transform is applied first, we must adjust
    //    each translation point by scale factor.
    let translateX = (outputSize.width - newWidth) / 2 / scale
    let translateY = (outputSize.height - newHeight) / 2 / scale

    // Set transform to resize video while retaining aspect ratio
    let resizeTransform = CGAffineTransformMakeScale(scale, scale)

    // Apply translation & create final transform
    let finalTransform = CGAffineTransformTranslate(resizeTransform, translateX, translateY)

    // Return final transform
    return finalTransform
}
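As a quick sanity check of the helper above (a sketch I added; the numbers assume the sizes discussed in the question): filling a 320x568 output from a 1920x1080 source means the height ratio wins, the frame scales to roughly 1010x568, and the translation centers the horizontal overflow.

import UIKit

// Playground-style walkthrough of getResizeAspectFillTransform's math.
let videoSize = CGSizeMake(1920, 1080)
let outputSize = CGSizeMake(320, 568)
let scale = max(outputSize.width / videoSize.width,    // ~0.167
                outputSize.height / videoSize.height)  // ~0.526 -> wins
let newWidth = videoSize.width * scale                 // ~1009.8
let translateX = (outputSize.width - newWidth) / 2 / scale  // ~-655.8, in pre-scale units
let transform = CGAffineTransformTranslate(CGAffineTransformMakeScale(scale, scale), translateX, 0)
// The transform scales the frame to fill 320x568 and shifts it left so the
// ~690-point horizontal overflow is split evenly on both sides.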
Best answer
Try this. Start a new Single View project in Swift, replace the ViewController with this code, and you should be good to go!
I've set up a previewLayer that's a different size from the output; change it at the top of the file.
I added some basic orientation support. The output sizes differ slightly for landscape vs. portrait. You can specify whatever video size dimensions you like here and it should work fine.
Check out the videoSettings dictionary (in getAssetWriter below) for the codec and size of the output file. You can also add other settings there to deal with things like keyFrameIntervals and tune the output.
I added a recording image to show when it's recording (tap to start, tap to stop). You'll need to add an asset named "recording" to Assets.xcassets (or comment out the line in setupViewControls that tries to load it).
That's pretty much it. Good luck!
Oh, and it dumps the video into the app's documents directory, so you'll need to go to Window/Devices and download the Container to get at the video easily. There's a spot marked TODO where you can hook in and copy the file to the PhotoLibrary, which makes testing easier (a sketch of that follows the code below).
import UIKit
import AVFoundation

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate {

    let CAPTURE_SIZE_LANDSCAPE: CGSize = CGSizeMake(1280, 720)
    let CAPTURE_SIZE_PORTRAIT: CGSize = CGSizeMake(720, 1280)

    var recordingImage : UIImageView = UIImageView()
    var previewLayer : AVCaptureVideoPreviewLayer?

    var audioQueue : dispatch_queue_t?
    var videoQueue : dispatch_queue_t?

    let captureSession = AVCaptureSession()
    var assetWriter : AVAssetWriter?
    var assetWriterInputCamera : AVAssetWriterInput?
    var assetWriterInputAudio : AVAssetWriterInput?
    var outputConnection: AVCaptureConnection?

    var captureDeviceBack : AVCaptureDevice?
    var captureDeviceFront : AVCaptureDevice?
    var captureDeviceMic : AVCaptureDevice?
    var sessionSetupDone: Bool = false

    var isRecordingStarted = false
    //var recordingStartedTime = kCMTimeZero

    var videoOutputURL : NSURL?

    var captureSize: CGSize = CGSizeMake(1280, 720)
    var previewFrame: CGRect = CGRectMake(0, 0, 180, 360)

    var captureDeviceTrigger = true
    var captureDevice: AVCaptureDevice? {
        get {
            return captureDeviceTrigger ? captureDeviceFront : captureDeviceBack
        }
    }

    override func supportedInterfaceOrientations() -> UIInterfaceOrientationMask {
        return UIInterfaceOrientationMask.AllButUpsideDown
    }

    override func shouldAutorotate() -> Bool {
        if isRecordingStarted {
            return false
        }

        if UIDevice.currentDevice().orientation == UIDeviceOrientation.PortraitUpsideDown {
            return false
        }

        if let cameraPreview = self.previewLayer {
            if let connection = cameraPreview.connection {
                if connection.supportsVideoOrientation {
                    switch UIDevice.currentDevice().orientation {
                    case .LandscapeLeft:
                        connection.videoOrientation = .LandscapeRight
                    case .LandscapeRight:
                        connection.videoOrientation = .LandscapeLeft
                    case .Portrait:
                        connection.videoOrientation = .Portrait
                    case .FaceUp:
                        return false
                    case .FaceDown:
                        return false
                    default:
                        break
                    }
                }
            }
        }

        return true
    }
    override func viewDidLoad() {
        super.viewDidLoad()

        setupViewControls()

        //self.recordingStartedTime = kCMTimeZero

        // Setup capture session related logic
        videoQueue = dispatch_queue_create("video_write_queue", DISPATCH_QUEUE_SERIAL)
        audioQueue = dispatch_queue_create("audio_write_queue", DISPATCH_QUEUE_SERIAL)

        setupCaptureDevices()
        pre_start()
    }

    //MARK: UI methods
    func setupViewControls() {
        // TODO: I have an image (red circle) in an Assets.xcassets. Replace the following with your own image
        recordingImage.frame = CGRect(x: 0, y: 0, width: 50, height: 50)
        recordingImage.image = UIImage(named: "recording")
        recordingImage.hidden = true
        self.view.addSubview(recordingImage)

        // Setup tap to record and stop
        let tapGesture = UITapGestureRecognizer(target: self, action: "didGetTapped:")
        tapGesture.numberOfTapsRequired = 1
        self.view.addGestureRecognizer(tapGesture)
    }

    func didGetTapped(selector: UITapGestureRecognizer) {
        if self.isRecordingStarted {
            self.view.gestureRecognizers![0].enabled = false
            recordingImage.hidden = true

            self.stopRecording()
        } else {
            recordingImage.hidden = false
            self.startRecording()
        }

        self.isRecordingStarted = !self.isRecordingStarted
    }

    func switchCamera(selector: UIButton) {
        self.captureDeviceTrigger = !self.captureDeviceTrigger

        pre_start()
    }
    //MARK: Video logic
    func setupCaptureDevices() {
        let devices = AVCaptureDevice.devices()

        for device in devices {
            if device.hasMediaType(AVMediaTypeVideo) {
                if device.position == AVCaptureDevicePosition.Front {
                    captureDeviceFront = device as? AVCaptureDevice
                    NSLog("Video Controller: Setup. Front camera is found")
                }
                if device.position == AVCaptureDevicePosition.Back {
                    captureDeviceBack = device as? AVCaptureDevice
                    NSLog("Video Controller: Setup. Back camera is found")
                }
            }

            if device.hasMediaType(AVMediaTypeAudio) {
                captureDeviceMic = device as? AVCaptureDevice
                NSLog("Video Controller: Setup. Audio device is found")
            }
        }
    }

    func alertPermission() {
        let permissionAlert = UIAlertController(title: "No Permission", message: "Please allow access to Camera and Microphone", preferredStyle: UIAlertControllerStyle.Alert)
        permissionAlert.addAction(UIAlertAction(title: "Go to settings", style: .Default, handler: { (action: UIAlertAction!) in
            print("Video Controller: Permission for camera/mic denied. Going to settings")
            UIApplication.sharedApplication().openURL(NSURL(string: UIApplicationOpenSettingsURLString)!)
            print(UIApplicationOpenSettingsURLString)
        }))

        presentViewController(permissionAlert, animated: true, completion: nil)
    }

    func pre_start() {
        NSLog("Video Controller: pre_start")
        let videoPermission = AVCaptureDevice.authorizationStatusForMediaType(AVMediaTypeVideo)
        let audioPermission = AVCaptureDevice.authorizationStatusForMediaType(AVMediaTypeAudio)

        if (videoPermission == AVAuthorizationStatus.Denied) || (audioPermission == AVAuthorizationStatus.Denied) {
            self.alertPermission()
            pre_start()
            return
        }

        if (videoPermission == AVAuthorizationStatus.Authorized) {
            self.start()
            return
        }

        AVCaptureDevice.requestAccessForMediaType(AVMediaTypeVideo, completionHandler: { (granted :Bool) -> Void in
            self.pre_start()
        })
    }
    func start() {
        NSLog("Video Controller: start")
        if captureSession.running {
            captureSession.beginConfiguration()

            if let currentInput = captureSession.inputs[0] as? AVCaptureInput {
                captureSession.removeInput(currentInput)
            }

            do {
                try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
            } catch {
                print("Video Controller: begin session. Error adding video input device")
            }

            captureSession.commitConfiguration()
            return
        }

        do {
            try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
            try captureSession.addInput(AVCaptureDeviceInput(device: captureDeviceMic))
        } catch {
            print("Video Controller: start. error adding device: \(error)")
        }

        if let layer = AVCaptureVideoPreviewLayer(session: captureSession) {
            self.previewLayer = layer
            layer.videoGravity = AVLayerVideoGravityResizeAspect

            if let layerConnection = layer.connection {
                if UIDevice.currentDevice().orientation == .LandscapeRight {
                    layerConnection.videoOrientation = AVCaptureVideoOrientation.LandscapeLeft
                } else if UIDevice.currentDevice().orientation == .LandscapeLeft {
                    layerConnection.videoOrientation = AVCaptureVideoOrientation.LandscapeRight
                } else if UIDevice.currentDevice().orientation == .Portrait {
                    layerConnection.videoOrientation = AVCaptureVideoOrientation.Portrait
                }
            }

            // TODO: Set the output size of the Preview Layer here
            layer.frame = previewFrame
            self.view.layer.insertSublayer(layer, atIndex: 0)
        }

        let bufferVideoQueue = dispatch_queue_create("sample buffer delegate", DISPATCH_QUEUE_SERIAL)
        let videoOutput = AVCaptureVideoDataOutput()
        videoOutput.setSampleBufferDelegate(self, queue: bufferVideoQueue)
        captureSession.addOutput(videoOutput)
        if let connection = videoOutput.connectionWithMediaType(AVMediaTypeVideo) {
            self.outputConnection = connection
        }

        let bufferAudioQueue = dispatch_queue_create("audio buffer delegate", DISPATCH_QUEUE_SERIAL)
        let audioOutput = AVCaptureAudioDataOutput()
        audioOutput.setSampleBufferDelegate(self, queue: bufferAudioQueue)
        captureSession.addOutput(audioOutput)

        captureSession.startRunning()
    }
    func getAssetWriter() -> AVAssetWriter? {
        NSLog("Video Controller: getAssetWriter")
        let fileManager = NSFileManager.defaultManager()
        let urls = fileManager.URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask)
        guard let documentDirectory: NSURL = urls.first else {
            print("Video Controller: getAssetWriter: documentDir Error")
            return nil
        }

        let local_video_name = NSUUID().UUIDString + ".mp4"
        self.videoOutputURL = documentDirectory.URLByAppendingPathComponent(local_video_name)

        guard let url = self.videoOutputURL else {
            return nil
        }

        self.assetWriter = try? AVAssetWriter(URL: url, fileType: AVFileTypeMPEG4)
        guard let writer = self.assetWriter else {
            return nil
        }

        let videoSettings: [String : AnyObject] = [
            AVVideoCodecKey : AVVideoCodecH264,
            AVVideoWidthKey : captureSize.width,
            AVVideoHeightKey : captureSize.height,
        ]

        assetWriterInputCamera = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
        assetWriterInputCamera?.expectsMediaDataInRealTime = true
        writer.addInput(assetWriterInputCamera!)

        let audioSettings : [String : AnyObject] = [
            AVFormatIDKey : NSInteger(kAudioFormatMPEG4AAC),
            AVNumberOfChannelsKey : 2,
            AVSampleRateKey : NSNumber(double: 44100.0)
        ]

        assetWriterInputAudio = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: audioSettings)
        assetWriterInputAudio?.expectsMediaDataInRealTime = true
        writer.addInput(assetWriterInputAudio!)

        return writer
    }
    func configurePreset() {
        NSLog("Video Controller: configurePreset")
        if captureSession.canSetSessionPreset(AVCaptureSessionPreset1280x720) {
            captureSession.sessionPreset = AVCaptureSessionPreset1280x720
        } else {
            captureSession.sessionPreset = AVCaptureSessionPreset1920x1080
        }
    }

    func startRecording() {
        NSLog("Video Controller: Start recording")
        captureSize = UIDeviceOrientationIsLandscape(UIDevice.currentDevice().orientation) ? CAPTURE_SIZE_LANDSCAPE : CAPTURE_SIZE_PORTRAIT

        if let connection = self.outputConnection {
            if connection.supportsVideoOrientation {
                if UIDevice.currentDevice().orientation == .LandscapeRight {
                    connection.videoOrientation = AVCaptureVideoOrientation.LandscapeLeft
                    NSLog("orientation: right")
                } else if UIDevice.currentDevice().orientation == .LandscapeLeft {
                    connection.videoOrientation = AVCaptureVideoOrientation.LandscapeRight
                    NSLog("orientation: left")
                } else {
                    connection.videoOrientation = AVCaptureVideoOrientation.Portrait
                    NSLog("orientation: portrait")
                }
            }
        }

        if let writer = getAssetWriter() {
            self.assetWriter = writer

            let recordingClock = self.captureSession.masterClock
            writer.startWriting()
            writer.startSessionAtSourceTime(CMClockGetTime(recordingClock))
        }
    }

    func stopRecording() {
        NSLog("Video Controller: Stop recording")

        if let writer = self.assetWriter {
            writer.finishWritingWithCompletionHandler {
                print("Recording finished")
                // TODO: Handle the video file, copy it from the temp directory etc.
            }
        }
    }
    //MARK: Implementation for AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate
    func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
        if !self.isRecordingStarted {
            return
        }

        if let audio = self.assetWriterInputAudio where connection.audioChannels.count > 0 && audio.readyForMoreMediaData {
            dispatch_async(audioQueue!) {
                audio.appendSampleBuffer(sampleBuffer)
            }
            return
        }

        if let camera = self.assetWriterInputCamera where camera.readyForMoreMediaData {
            dispatch_async(videoQueue!) {
                camera.appendSampleBuffer(sampleBuffer)
            }
        }
    }
}
If you want to tune the output further, the videoSettings dictionary can be extended with compression properties for bitrate, profile, and keyframe interval, for example:

let videoSettings: [String : AnyObject] = [
    AVVideoCodecKey : AVVideoCodecH264,
    AVVideoWidthKey : captureSize.width,
    AVVideoHeightKey : captureSize.height,
    AVVideoCompressionPropertiesKey : [
        AVVideoAverageBitRateKey : 2000000,
        AVVideoProfileLevelKey : AVVideoProfileLevelH264Main41,
        AVVideoMaxKeyFrameIntervalKey : 90,
    ]
]
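And for the TODO in stopRecording, here is a minimal sketch of copying the finished file to the photo library (my addition, not from the original answer; run it inside the finishWritingWithCompletionHandler block once writing has completed):

// Sketch: copy the finished recording into the Saved Photos album for easy viewing.
if let path = self.videoOutputURL?.path
    where UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(path) {
    UISaveVideoAtPathToSavedPhotosAlbum(path, nil, nil, nil)
}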
For "ios - mimic AVLayerVideoGravityResizeAspectFill: crop and center video to mimic preview without losing sharpness", see the original question on Stack Overflow: https://stackoverflow.com/questions/35261603/