I have a photo with a face in it.
I have a carnival mask:
With this code I can detect the face:
let ciImage = CIImage(cgImage: photo)
let options = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: options)!
let faces = faceDetector.features(in: ciImage)
if let face = faces.first as? CIFaceFeature {
}
Best answer
I would try this approach:
1. Get the leftEyePosition, rightEyePosition and faceAngle values (all part of CIFaceFeature).
2. Calculate the distance between the left and right eye.
Here is a link on how to calculate the distance: https://www.hackingwithswift.com/example-code/core-graphics/how-to-calculate-the-distance-between-two-cgpoints
3. Create constants with the mask's original dimensions and with the x and y distance to the center of one of its eyes.
4. Based on the eye distance, you can proportionally calculate the new width of the mask.
5. This should give you a properly sized mask. In the same way, also calculate the new x and y distance to the center of one of the mask's eyes.
6. Scale all values proportionally again to fit the final intended size on screen.
7. Place the mask on the photo using the eye coordinates, offset by the mask's eye-to-corner distance.
8. Rotate the mask using the faceAngle value.
Before importing the mask into the project, convert it to a PNG with a transparent background by removing the white background. You could do this in code, but it would take a lot of work, and depending on the mask source file the result might not be as good.
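The proportional scaling described in the steps above can be sketched as follows; the mask measurements and the photo's eye distance below are hypothetical placeholder values, not taken from any real image:

```swift
// Hypothetical mask measurements, taken once from the mask image in an editor.
let maskEyeDistance = 230.0             // distance between the mask's eye centers, in mask pixels
let maskWidth = 655.0                   // original mask width in pixels
let maskLeftEye = (x: 203.0, y: 200.0)  // mask's left-eye center

// Eye distance CIDetector would report for the photo (assumed here).
let photoEyeDistance = 115.0

// Scale factor mask : photo, then the mask size and eye offset that fit this face.
let scale = maskEyeDistance / photoEyeDistance
let fittedMaskWidth = maskWidth / scale
let fittedLeftEye = (x: maskLeftEye.x / scale, y: maskLeftEye.y / scale)

print(fittedMaskWidth)  // 327.5
```

The same divide-by-scale step is then repeated once more against the screen width to get the on-screen size.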
Update: I have tried out my solution. It is a simple single-screen iOS app. Just copy the code into the ViewController.swift file, add your mask as a PNG and a photo of a face as photo.jpg to the project, and it should work.
If you want to try it, here is a link to your photo as a PNG:
QPTF1.png
import UIKit
class ViewController: UIViewController {
override func viewDidLoad() {
super.viewDidLoad()
imageMethod()
}
func imageMethod() {
let uiMaskImage = UIImage(named: "QPTF1.png") //converted to PNG with transparency before adding to the project
let maskOriginalWidth = CGFloat(exactly: 655.0)!
let maskOriginalHeight = CGFloat(exactly: 364.0)!
let maskOriginalEyeDistance = CGFloat(exactly: 230.0)! //increase or decrease value to change the final size of the mask
let maskOriginalLeftEyePossitionX = CGFloat(exactly: 203.0)! //increase or decrease to fine-tune the mask position on the x axis
let maskOriginalLeftEyePossitionY = CGFloat(exactly: 200.0)! //increase or decrease to fine-tune the mask position on the y axis
//This code assumes the image AND face orientation always match!
//For other cases, the code needs to be adjusted using UIImage.Orientation to get the orientation and adjust the coordinates accordingly.
//CIDetector might also not detect faces whose orientation differs from the photo's. Try CIDetectorImageOrientation to look for other orientations if no face has been detected.
//You might also want to use other anchor points and scale values (right eye, nose, etc.) in case the left eye or the left-to-right eye distance is not available.
//This code is also quite wordy; it could surely be cut to half the size and simplified in many places.
let uiImageFace = UIImage(named: "photo.jpg")
let ciImageFace = CIImage(image: uiImageFace!)
let options = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: options)!
let faces = faceDetector.features(in: ciImageFace!)
if let face = faces.first as? CIFaceFeature {
/*
Getting the distances and angle based on the original photo
*/
let faceAngle = face.faceAngle
let rotationAngle = CGFloat(faceAngle * .pi / 180.0)
//The distance in between the eyes of the original photo.
let originalFaceEyeDistance = CGPointDistance(from: face.leftEyePosition, to: face.rightEyePosition)
/*
Adjusting the mask and its eye coordinates to fit the original photo.
*/
//Setting the scale mask : original.
let eyeDistanceScale = maskOriginalEyeDistance / originalFaceEyeDistance
//The new dimensions of the mask.
let newMaskWidth = maskOriginalWidth/eyeDistanceScale
let newMaskHeight = maskOriginalHeight/eyeDistanceScale
//The new mask coordinates of the left eye in relation to the original photo.
let newMaskLeftEyePossitionX = maskOriginalLeftEyePossitionX / eyeDistanceScale
let newMaskLeftEyePossitionY = maskOriginalLeftEyePossitionY / eyeDistanceScale
/*
Adjusting the size values to fit the desired final size on the screen.
*/
//Using the width of the screen to calculate the new scale.
let screenScale = uiImageFace!.size.width / view.frame.width
//The new final dimensions of the mask
let scaledToScreenMaskWidth = newMaskWidth / screenScale
let scaledToScreenMaskHeight = newMaskHeight / screenScale
//The new final dimensions of the photo.
let scaledToScreenPhotoHeight = uiImageFace!.size.height / screenScale
let scaledToScreenPhotoWidth = uiImageFace!.size.width / screenScale
//The new eye coordinates of the photo.
let scaledToScreenLeftEyeFacePositionX = face.leftEyePosition.x / screenScale
let scaledToScreenLeftEyeFacePositionY = (uiImageFace!.size.height - face.leftEyePosition.y) / screenScale //reversing the y direction
//The new eye to corner distance of the mask
let scaledToScreenMaskLeftEyeX = newMaskLeftEyePossitionX / screenScale
let scaledToScreenMaskLeftEyeY = newMaskLeftEyePossitionY / screenScale
//The final coordinates for the mask
let adjustedMaskLeftEyeX = scaledToScreenLeftEyeFacePositionX - scaledToScreenMaskLeftEyeX
let adjustedMaskLeftEyeY = scaledToScreenLeftEyeFacePositionY - scaledToScreenMaskLeftEyeY
/*
Showing the image on the screen.
*/
let baseImageView = UIImageView(image: uiImageFace!)
//If x and y are not 0, the mask x and y need to be adjusted too.
baseImageView.frame = CGRect(x: CGFloat(exactly: 0.0)!, y: CGFloat(exactly: 0.0)!, width: scaledToScreenPhotoWidth, height: scaledToScreenPhotoHeight)
view.addSubview(baseImageView)
let maskImageView = UIImageView(image: uiMaskImage!)
maskImageView.frame = CGRect(x: adjustedMaskLeftEyeX, y: adjustedMaskLeftEyeY, width: scaledToScreenMaskWidth, height: scaledToScreenMaskHeight)
maskImageView.transform = CGAffineTransform(rotationAngle: rotationAngle)
view.addSubview(maskImageView)
}
}
func CGPointDistanceSquared(from: CGPoint, to: CGPoint) -> CGFloat {
return (from.x - to.x) * (from.x - to.x) + (from.y - to.y) * (from.y - to.y)
}
func CGPointDistance(from: CGPoint, to: CGPoint) -> CGFloat {
return sqrt(CGPointDistanceSquared(from: from, to: to))
}
}
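One detail in the listing above that is easy to miss: Core Image reports face coordinates with the origin in the bottom-left corner, while UIKit puts the origin in the top-left, which is why the code subtracts leftEyePosition.y from the image height. A minimal sketch of that conversion (the sample values are made up):

```swift
// Core Image origin is bottom-left; UIKit origin is top-left.
// Flip y against the image height to convert between the two.
func uiKitY(fromCoreImageY ciY: Double, imageHeight: Double) -> Double {
    return imageHeight - ciY
}

// Hypothetical eye position reported by CIDetector in a 1000 px tall photo.
let flipped = uiKitY(fromCoreImageY: 640.0, imageHeight: 1000.0)
print(flipped)  // 360.0
```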
import UIKit
class ViewController: UIViewController {
override func viewDidLoad() {
super.viewDidLoad()
imageMethod()
}
func imageMethod() {
struct coords {
let coord: (x: Int, y: Int)
let size: Int
}
let uiMaskImage = UIImage(named: "QPTF1.png") //converted to PNG with transparency before adding to the project
let uiMaskImage2 = UIImage(named: "QPTF1.png")
let ciMaskImage2 = CIImage(image: uiMaskImage2!)
let context = CIContext(options: nil)
let cgMaskImage = context.createCGImage(ciMaskImage2!, from: ciMaskImage2!.extent)
let pixelData = cgMaskImage!.dataProvider!.data
let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
let alphaLevel: CGFloat = 0.0 //0.0 - 1.0 set higher to allow images with partially transparent eyes, like sunglasses.
var possibleEyes: [coords] = []
let frame = 10
var detailLevel = 6
let sizeX = Int((uiMaskImage?.size.width)!)
let sizeY = Int((uiMaskImage?.size.height)!)
var points: [(x: Int, y: Int)] = []
var pointA_X = sizeX / 4
var pointA_Y = sizeY / 4
var pointB_X = sizeX / 4
var pointB_Y = sizeY * 3 / 4
var pointC_X = sizeX * 3 / 4
var pointC_Y = sizeY / 4
var pointD_X = sizeX * 3 / 4
var pointD_Y = sizeY * 3 / 4
var nextXsmaller = pointA_X / 2
var nextYsmaller = pointA_Y / 2
points.append((x: pointA_X, y: pointA_Y))
points.append((x: pointB_X, y: pointB_Y))
points.append((x: pointC_X, y: pointC_Y))
points.append((x: pointD_X, y: pointD_Y))
func transparentArea(_ x: Int, _ y: Int) -> Bool {
let pos = CGPoint(x: x, y: y)
let pixelInfo: Int = ((Int(uiMaskImage2!.size.width) * Int(pos.y)) + Int(pos.x)) * 4
let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
if a <= alphaLevel {
return true
} else {
return false
}
}
func createPoints(point: (x: Int, y: Int)) {
pointA_X = point.x - nextXsmaller
pointA_Y = point.y - nextYsmaller
pointB_X = point.x - nextXsmaller
pointB_Y = point.y + nextYsmaller
pointC_X = point.x + nextXsmaller
pointC_Y = point.y - nextYsmaller
pointD_X = point.x + nextXsmaller
pointD_Y = point.y + nextYsmaller
points.append((x: pointA_X, y: pointA_Y))
points.append((x: pointB_X, y: pointB_Y))
points.append((x: pointC_X, y: pointC_Y))
points.append((x: pointD_X, y: pointD_Y))
}
func checkSides(point: (x: Int, y: Int)) {
var xNeg = (val: 0, end: false)
var xPos = (val: 0, end: false)
var yNeg = (val: 0, end: false)
var yPos = (val: 0, end: false)
if transparentArea(point.x, point.y) {
xNeg.val = point.x
xPos.val = point.x
yNeg.val = point.y
yPos.val = point.y
while true {
if transparentArea(xNeg.val, point.y) {
xNeg.val -= 1
if xNeg.val <= frame {
break
}
} else {
xNeg.end = true
}
if transparentArea(xPos.val, point.y) {
xPos.val += 1
if xPos.val >= sizeX-frame {
break
}
} else {
xPos.end = true
}
if transparentArea(point.x, yNeg.val) {
yNeg.val -= 1
if yNeg.val <= frame {
break
}
} else {
yNeg.end = true
}
if transparentArea(point.x, yPos.val) {
yPos.val += 1
if yPos.val >= sizeY-frame {
break
}
} else {
yPos.end = true
}
if xNeg.end && xPos.end && yNeg.end && yPos.end {
let newEyes = coords(coord: (point.x, point.y), size: (xPos.val - xNeg.val) * (yPos.val - yNeg.val) )
possibleEyes.append(newEyes)
break
}
}
}
}
while detailLevel > 0 {
print("Run: \(detailLevel)")
for (index, point) in points.enumerated().reversed() {
//checking whether the point lies inside a transparent area
checkSides(point: point)
points.remove(at: index)
if detailLevel > 1 {
createPoints(point: point)
}
}
detailLevel -= 1
nextXsmaller = nextXsmaller / 2
nextYsmaller = nextYsmaller / 2
}
possibleEyes.sort { $0.coord.x > $1.coord.x }
var rightEyes = possibleEyes[0..<possibleEyes.count/2] //half-open range, so the middle element is not counted twice
var leftEyes = possibleEyes[possibleEyes.count/2..<possibleEyes.count]
leftEyes.sort { $0.size > $1.size }
rightEyes.sort { $0.size > $1.size }
leftEyes = leftEyes.dropLast(Int(Double(leftEyes.count) * 0.01))
rightEyes = rightEyes.dropLast(Int(Double(rightEyes.count) * 0.01))
let sumXleft = ( leftEyes.reduce(0) { $0 + $1.coord.x} ) / leftEyes.count
let sumYleft = ( leftEyes.reduce(0) { $0 + $1.coord.y} ) / leftEyes.count
let sumXright = ( rightEyes.reduce(0) { $0 + $1.coord.x} ) / rightEyes.count
let sumYright = ( rightEyes.reduce(0) { $0 + $1.coord.y} ) / rightEyes.count
let maskOriginalWidth = CGFloat(exactly: sizeX)!
let maskOriginalHeight = CGFloat(exactly: sizeY)!
let maskOriginalLeftEyePossitionX = CGFloat(exactly: sumXleft)!
let maskOriginalLeftEyePossitionY = CGFloat(exactly: sumYleft)!
let maskOriginalEyeDistance = CGPointDistance(from: CGPoint(x: sumXright, y: sumYright), to: CGPoint(x: sumXleft, y: sumYleft))
//This code assumes the image AND face orientation always match!
//For other cases, the code needs to be adjusted using UIImage.Orientation to get the orientation and adjust the coordinates accordingly.
//CIDetector might also not detect faces whose orientation differs from the photo's. Try CIDetectorImageOrientation to look for other orientations if no face has been detected.
//You might also want to use other anchor points and scale values (right eye, nose, etc.) in case the left eye or the left-to-right eye distance is not available.
//This code is also quite wordy; it could surely be cut to half the size and simplified in many places.
let uiImageFace = UIImage(named: "photo3.jpg")
let ciImageFace = CIImage(image: uiImageFace!)
let options = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: options)!
let faces = faceDetector.features(in: ciImageFace!)
if let face = faces.first as? CIFaceFeature {
/*
Getting the distances and angle based on the original photo
*/
let faceAngle = face.faceAngle
let rotationAngle = CGFloat(faceAngle * .pi / 180.0)
//The distance in between the eyes of the original photo.
let originalFaceEyeDistance = CGPointDistance(from: face.leftEyePosition, to: face.rightEyePosition)
/*
Adjusting the mask and its eye coordinates to fit the original photo.
*/
//Setting the scale mask : original.
let eyeDistanceScale = maskOriginalEyeDistance / originalFaceEyeDistance
//The new dimensions of the mask.
let newMaskWidth = maskOriginalWidth/eyeDistanceScale
let newMaskHeight = maskOriginalHeight/eyeDistanceScale
//The new mask coordinates of the left eye in relation to the original photo.
let newMaskLeftEyePossitionX = maskOriginalLeftEyePossitionX / eyeDistanceScale
let newMaskLeftEyePossitionY = maskOriginalLeftEyePossitionY / eyeDistanceScale
/*
Adjusting the size values to fit the desired final size on the screen.
*/
//Using the width of the screen to calculate the new scale.
let screenScale = uiImageFace!.size.width / view.frame.width
//The new final dimensions of the mask
let scaledToScreenMaskWidth = newMaskWidth / screenScale
let scaledToScreenMaskHeight = newMaskHeight / screenScale
//The new final dimensions of the photo.
let scaledToScreenPhotoHeight = uiImageFace!.size.height / screenScale
let scaledToScreenPhotoWidth = uiImageFace!.size.width / screenScale
//The new eye coordinates of the photo.
let scaledToScreenLeftEyeFacePositionX = face.leftEyePosition.x / screenScale
let scaledToScreenLeftEyeFacePositionY = (uiImageFace!.size.height - face.leftEyePosition.y) / screenScale //reversing the y direction
//The new eye to corner distance of the mask
let scaledToScreenMaskLeftEyeX = newMaskLeftEyePossitionX / screenScale
let scaledToScreenMaskLeftEyeY = newMaskLeftEyePossitionY / screenScale
//The final coordinates for the mask
let adjustedMaskLeftEyeX = scaledToScreenLeftEyeFacePositionX - scaledToScreenMaskLeftEyeX
let adjustedMaskLeftEyeY = scaledToScreenLeftEyeFacePositionY - scaledToScreenMaskLeftEyeY
/*
Showing the image on the screen.
*/
let baseImageView = UIImageView(image: uiImageFace!)
//If x and y are not 0, the mask x and y need to be adjusted too.
baseImageView.frame = CGRect(x: CGFloat(exactly: 0.0)!, y: CGFloat(exactly: 0.0)!, width: scaledToScreenPhotoWidth, height: scaledToScreenPhotoHeight)
view.addSubview(baseImageView)
let maskImageView = UIImageView(image: uiMaskImage!)
maskImageView.frame = CGRect(x: adjustedMaskLeftEyeX, y: adjustedMaskLeftEyeY, width: scaledToScreenMaskWidth, height: scaledToScreenMaskHeight)
maskImageView.transform = CGAffineTransform(rotationAngle: rotationAngle)
view.addSubview(maskImageView)
}
}
func CGPointDistanceSquared(from: CGPoint, to: CGPoint) -> CGFloat {
return (from.x - to.x) * (from.x - to.x) + (from.y - to.y) * (from.y - to.y)
}
func CGPointDistance(from: CGPoint, to: CGPoint) -> CGFloat {
return sqrt(CGPointDistanceSquared(from: from, to: to))
}
}
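The eye-finding pass in the second listing works by recursively subdividing sample points over the mask and, for any point that lands on a transparent pixel, walking outward along both axes until it hits opaque pixels, recording the extent as a candidate eye hole. A stripped-down sketch of the same idea, using a plain Bool grid in place of the PNG's alpha channel (the grid below is made up for illustration):

```swift
// true = transparent pixel (a hole in the mask), false = opaque.
let grid: [[Bool]] = [
    [false, false, false, false, false, false],
    [false, true,  true,  false, true,  false],
    [false, true,  true,  false, true,  false],
    [false, false, false, false, false, false],
]

// From a seed inside a hole, walk outward in all four directions
// while pixels stay transparent, and return the hole's bounding extent.
func holeExtent(x: Int, y: Int) -> (w: Int, h: Int)? {
    guard grid[y][x] else { return nil }  // seed must be transparent
    var xNeg = x, xPos = x, yNeg = y, yPos = y
    while xNeg > 0, grid[y][xNeg - 1] { xNeg -= 1 }
    while xPos < grid[y].count - 1, grid[y][xPos + 1] { xPos += 1 }
    while yNeg > 0, grid[yNeg - 1][x] { yNeg -= 1 }
    while yPos < grid.count - 1, grid[yPos + 1][x] { yPos += 1 }
    return (w: xPos - xNeg + 1, h: yPos - yNeg + 1)
}

print(holeExtent(x: 1, y: 1)!)  // (w: 2, h: 2)
```

In the full listing the candidate extents are then averaged per half of the mask to estimate the two eye centers.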
Regarding "Swift - adding a carnival mask to a photo containing a face", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/61688322/