Comments:
What kind of editor are you using?
This is the 5th tutorial I've followed for making DL projects. In every project I get stuck because apparently they don't let me use some function (GUI in this case), because of which half the things don't make sense. Why do they have to change and remove the old ones, for god's sake...
Everything is running perfectly, but the real-time detection is messing up badly 😭
The points are coming out in a different order: xmax then xmin, ymax then ymin. HELP!!!
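A minimal reordering sketch, assuming the values really do come back as [xmax, xmin, ymax, ymin] as described above (order_box is a hypothetical helper, not part of the tutorial):

def order_box(coords):
    # coords assumed to be [xmax, xmin, ymax, ymin]
    x1, x2 = sorted(coords[:2])
    y1, y2 = sorted(coords[2:])
    return [x1, y1, x2, y2]  # standard [xmin, ymin, xmax, ymax]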
I'm thinking of writing the code from scratch, typing it out line by line after listening to the explanation of each part in the video.
Is that a good idea, and is this video helpful for that?
Thanks
@Nicholas Does it work well for multiple faces in a single frame?
Is it necessary to create an environment?
I can't be the only one that thinks this guy looks like Giancarlo Esposito, right?
I am getting an error with cv2.imread and cv2.imshow; how do I overcome this error?
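A minimal sanity-check sketch, assuming the error comes from the capitalised module name (it must be lowercase cv2) or from a path that doesn't resolve; the path below is hypothetical:

import cv2

img = cv2.imread('data/images/sample.jpg')  # hypothetical path; returns None if the file can't be read
if img is None:
    raise FileNotFoundError('cv2.imread could not load the image, check the path')
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()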
Sir, why don't we use OpenCV for face detection? It would only take a few lines of code. Could you please clarify this?
How do I make a deepfake image detector now?
Any idea why my loss functions will not decrease? Over 40 epochs my loss chart is just a flat jagged bar… followed everything to a T…
Hey Nick, can you please help me make this work for multiple classes and objects?
Help me.
Where can I get the dataset? Could you please tell me?
On my Jupyter environment I got this error when trying to install the libraries:
ERROR: Could not find a version that satisfies the requirement tensorlow-gpu (from versions: none)
ERROR: No matching distribution found for tensorlow-gpu
Can anyone help me out?
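Not a definitive answer, but note the requirement in the error above is spelled tensorlow-gpu (missing an "f"); the corrected name would be tensorflow-gpu, though as another comment further down points out, that package has since been retired, so the plain package is the safer install:

pip install tensorflow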
How can I use transfer learning on this to turn it into a happy/angry face object detector?
Is it possible to train a custom DeepFace model for emotion recognition?
Hello, I'm having a challenge pip installing tensorflow-gpu.
(tf-gpu) C:\Users\seasi>pip install --upgrade tensor-gpu
ERROR: Could not find a version that satisfies the requirement tensor-gpu (from versions: none)
ERROR: No matching distribution found for tensor-gpu.
This is the error message I get.
I have explored ChatGPT and the TensorFlow site.
I'm using Python 3.8.19.
Thanks for the tutorial!
I've managed to go through all the steps and got quite a nice model, but like some of the users here I've encountered some problems with model training and saving.
1. ValueError: Cannot take the length of shape with unknown rank:
The problem is in loading the labels (step 5.4). By default train_labels has an unknown tensor shape. We need to specify it manually (the same goes for train_labels, test_labels and val_labels):
train_labels = train_labels.map(lambda x, y: (tf.reshape(x, [1]), tf.reshape(y, [4])))
2. Saving in step 11.2 is incorrect. The provided code saves the `facetracker` model, which is not the one we really trained (that is `model`). We need to save `model` and call `model.predict` to get results. Also, the .h5 format is deprecated now, so it's recommended to use the .keras format. So we need to call model.save('model.keras') (see the usage sketch after the full code below).
But that's not really enough: to serialize the FaceTracker model properly we need to add the `@tf.keras.utils.register_keras_serializable()` decorator and define two methods for correct serialization of the model field. In get_config we serialize the model object held inside the FaceTracker class, and in from_config we load it back.
Note: I've changed the `model` field name to `eyetracker`
Hope it helps!
Full FaceTracker updated code:
@tf.keras.utils.register_keras_serializable()
class FaceTracker(Model):
    def __init__(self, eyetracker, **kwargs):
        super().__init__(**kwargs)
        self.eyetracker = eyetracker

    def get_config(self):
        # serialize the wrapped model so the whole class can be saved in .keras format
        base_config = super().get_config()
        config = {
            "eyetracker": tf.keras.utils.serialize_keras_object(self.eyetracker),
        }
        return {**base_config, **config}

    @classmethod
    def from_config(cls, config):
        # rebuild the wrapped model when the saved file is loaded back
        sublayer_config = config.pop("eyetracker")
        sublayer = tf.keras.utils.deserialize_keras_object(sublayer_config)
        return cls(sublayer, **config)

    def compile(self, opt, classloss, localizationloss, **kwargs):
        super().compile(**kwargs)
        self.closs = classloss
        self.lloss = localizationloss
        self.opt = opt

    def train_step(self, batch, **kwargs):
        X, y = batch
        with tf.GradientTape() as tape:
            classes, coords = self.eyetracker(X, training=True)
            batch_classloss = self.closs(y[0], classes)
            batch_localizationloss = self.lloss(tf.cast(y[1], tf.float32), coords)
            total_loss = batch_localizationloss + 0.5 * batch_classloss
        grad = tape.gradient(total_loss, self.eyetracker.trainable_variables)
        # use the optimizer stored in compile() rather than a global opt
        self.opt.apply_gradients(zip(grad, self.eyetracker.trainable_variables))
        return {"total_loss": total_loss, "class_loss": batch_classloss, "regress_loss": batch_localizationloss}

    def test_step(self, batch, **kwargs):
        X, y = batch
        classes, coords = self.eyetracker(X, training=False)
        batch_classloss = self.closs(tf.cast(y[0], tf.float32), classes)
        batch_localizationloss = self.lloss(tf.cast(y[1], tf.float32), coords)
        total_loss = batch_localizationloss + 0.5 * batch_classloss
        return {"total_loss": total_loss, "class_loss": batch_classloss, "regress_loss": batch_localizationloss}

    def call(self, X):
        return self.eyetracker(X)
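Not from the video itself, just a short usage sketch tying the two fixes together; it assumes the train/test/val label datasets, the facetracker model, opt, classloss, regressloss and tensorboard_callback were built exactly as in the tutorial, and the file name model.keras is arbitrary:

# Fix 1: give every label dataset an explicit shape (class -> [1], bbox -> [4])
train_labels = train_labels.map(lambda x, y: (tf.reshape(x, [1]), tf.reshape(y, [4])))
test_labels = test_labels.map(lambda x, y: (tf.reshape(x, [1]), tf.reshape(y, [4])))
val_labels = val_labels.map(lambda x, y: (tf.reshape(x, [1]), tf.reshape(y, [4])))

# Fix 2: train the wrapper itself, then save/reload it in the .keras format
model = FaceTracker(facetracker)
model.compile(opt, classloss, regressloss)
hist = model.fit(train, epochs=10, validation_data=val, callbacks=[tensorboard_callback])
model.save('model.keras')

# compile=False because FaceTracker uses a custom compile() signature
reloaded = tf.keras.models.load_model('model.keras', compile=False)
X, y = train.as_numpy_iterator().next()
classes, coords = reloaded.predict(X)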
What's the accuracy of your model here?
Hello, thank you for your video, it is very helpful for my final assignment. But I get an error when I run this code in part 10.2:
hist = model.fit(train, epochs=10, validation_data=val, callbacks=[tensorboard_callback])
it returns:
ValueError: Cannot take the length of shape with unknown rank.
Can you help me with that?
Thank you
Do you have multiple classes or labels in the face detection model? I want to be more specific, such as using the name of the person I want to detect. Thank you.
I don't know exactly why, but loading the .h5 model in my case is taking so much time... it's been 9 minutes and it's still loading.
I did all of the above steps in Colab, downloaded the .h5 file from there, and am trying to use it on my local machine. When I try to load the model it takes so long... Can anyone help?
Where can I purchase this project?
Hey, I wanted to ask if you have any papers backing up this material. I am currently trying to create some simple face detection for a university course, and your tutorial is awesome.
This is awesome, Nick! How does the complexity increase if you want to identify multiple boxes in the image, and/or combine them to identify empty space in the image? Is there a similar approach to identify things in, say, rendered HTML? Is there an alternative to cv2 that can render and visualize HTML, like Chromium?
Hi Nick, can you build a special model similar to this one?
The GOAL: detecting the individual from the face image.
Description: 1) detect the face; 2) determine the individual from a database or folder of stored images.
Finishing step: record the date of the face detection or individual detection.
Nick, what's the difference between this and using TFOD from your previous video? I am new to deep learning, so can you just tell me the difference?
I keep getting (array([0], dtype=uint8), array([0., 0., 0., 0.], dtype=float16)). Why? I need an answer quickly, please.
Hello Nicholas, great video mate. I do have a question: even when following every step you take, the GitHub code just does not work at 2 stages:
fig, ax = plt.subplots(ncols=4, figsize=(20, 20))
for idx in range(4):
    sample_image = res[0][idx]
    sample_coords = res[1][1][idx]
    cv2.rectangle(sample_image,
                  tuple(np.multiply(sample_coords[:2], [120, 120]).astype(int)),
                  tuple(np.multiply(sample_coords[2:], [120, 120]).astype(int)),
                  (255, 0, 0), 2)
    ax[idx].imshow(sample_image)
(the issue is that it does not show the pictures like in your video; it displays nothing)
and
class FaceTracker(Model):
    def __init__(self, eyetracker, **kwargs):
        super().__init__(**kwargs)
        self.model = eyetracker

    def compile(self, opt, classloss, localizationloss, **kwargs):
        super().compile(**kwargs)
        self.closs = classloss
        self.lloss = localizationloss
        self.opt = opt

    def train_step(self, batch, **kwargs):
        X, y = batch
        with tf.GradientTape() as tape:
            classes, coords = self.model(X, training=True)
            batch_classloss = self.closs(y[0], classes)
            batch_localizationloss = self.lloss(tf.cast(y[1], tf.float32), coords)
            total_loss = batch_localizationloss + 0.5*batch_classloss
        grad = tape.gradient(total_loss, self.model.trainable_variables)
        opt.apply_gradients(zip(grad, self.model.trainable_variables))
        return {"total_loss": total_loss, "class_loss": batch_classloss, "regress_loss": batch_localizationloss}

    def test_step(self, batch, **kwargs):
        X, y = batch
        classes, coords = self.model(X, training=False)
        batch_classloss = self.closs(y[0], classes)
        batch_localizationloss = self.lloss(tf.cast(y[1], tf.float32), coords)
        total_loss = batch_localizationloss + 0.5*batch_classloss
        return {"total_loss": total_loss, "class_loss": batch_classloss, "regress_loss": batch_localizationloss}

    def call(self, X, **kwargs):
        return self.model(X, **kwargs)

model = FaceTracker(facetracker)
model.compile(opt, classloss, regressloss)
10.2 Train
logdir='logs'
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=logdir)
hist = model.fit(train, epochs=10, validation_data=val, callbacks=[tensorboard_callback])
It does not want to train at all; I get an error about "Cannot take the length of Shape" and whatever I do won't fix the problem,
even if I check the paths, check the coords, check the data, make sure everything is well labeled, and have a good mix of images with and without faces.
Is it possible for you to update the GitHub code to address these problems, or maybe make a new guide?
Nick, what if I have a dataset of different faces with different labels? How can I print those labels instead of just 'face'?
Thank you!! It helped a lot to build a face detector from scratch and I've learned tons of things from you, really appreciated <3
I want to detect objects that have similar properties and appear in large numbers in a single frame.
For new coders watching the video in 2024: the tensorflow-gpu package has been discontinued; directly install tensorflow and opencv instead (see the sketch below).
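A minimal sketch of that install (package names as published on PyPI; no versions pinned):

pip install tensorflow opencv-python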
Hello, can I use this code in a project I am making? What are the licensing terms? It would be a great help!!
I like the video; the explanations are clear and to the point. But I have to admit, I like it when people type out the code from scratch instead of reviewing code. Thanks for the tutorial.
I wonder, is it possible to use ImageDataGenerator from TensorFlow Keras? And if so, how do you deal with the coordinates? 🤔